diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Catia V5 R20 64 Bit Crack The Ultimate Solution for Your Engineering Projects.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Catia V5 R20 64 Bit Crack The Ultimate Solution for Your Engineering Projects.md
deleted file mode 100644
index 9b90655b3d1d98261582716f3d20c931527f63e7..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Catia V5 R20 64 Bit Crack The Ultimate Solution for Your Engineering Projects.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
-
Catia V5 R20 64 Bit Crack: What You Need to Know
-
If you are looking for a way to use one of the most popular and powerful software for Computer Aided Design, Engineering, Analysis, Manufacturing and Production (CAD/CAM/CAE), you might have heard of Catia V5 R20. This is a comprehensive program that allows you to design, analyze, and produce products in various industries, such as aerospace, automotive, shipbuilding, consumer goods, and more. However, using Catia V5 R20 is not cheap or easy. You need to pay a license fee to access its full features, and you need to activate it online with a valid serial number. This can be a hassle for some users who want to use Catia V5 R20 without spending too much money or time.
-
That's why some people look for a crack for Catia V5 R20. A crack is a file that modifies or bypasses the original software's security features, allowing you to use it without paying or activating it. A crack can be a patch, a keygen, or a loader that changes the software's code or behavior. In this article, we will show you how to download and install Catia V5 R20 64 bit crack on your Windows computer. We will also discuss the benefits and risks of using a crack for Catia V5 R20, as well as some alternatives that you can consider.
The first step to use Catia V5 R20 with a crack is to find and download the crack file from a reliable source. There are many websites that claim to offer cracks for various software, but not all of them are trustworthy. Some of them may contain malware or viruses that can harm your computer or steal your personal information. Some of them may also provide fake or outdated cracks that do not work or cause errors. Therefore, you need to be careful when choosing where to download the crack file from.
-
One possible source that we found is this blog post that provides a link to download Catia P2 V5R20 with a crack included. According to the post, this is a full offline installer setup of Catia P2 V5R20 that works perfectly fine without any problem. The post also provides instructions on how to install the program and apply the crack. However, we cannot guarantee the safety or validity of this source, so you should use it at your own risk.
-
To download the crack file from this source, you need to follow these steps:
-
-
Click on the green "Direct Download" button on the blog post.
-
Wait for the download to start automatically.
-
Save the file on your computer.
-
Extract the file using WinRAR or WinZip.
-
-
How to Install Catia V5 R20 64 Bit Crack
-
The next step is to install Catia P2 V5R20 on your computer using the crack file. Before you do that, you need to make sure that your system meets the minimum requirements for running Catia P2 V5R20. According to this website, these are the system requirements:
-
-
Operating System: Windows 7/8/8.1/10
-
Memory (RAM): 4 GB of RAM required.
-
Hard Disk Space: 5 GB of free space required for full installation.
-
Processor: Intel Core i3 or higher multi-core processor.
-
-
You also need to disable your antivirus and firewall before installing Catia P2 V5R20 with a crack. This is because some antivirus programs may detect the crack file as a threat and delete it or block its execution. To disable your antivirus and firewall, you can follow these steps for Windows Defender Firewall or these steps for Microsoft Defender Antivirus.
-
After disabling your antivirus and firewall, you can install Catia P2 V5R20 with a crack by following these steps:
-
-
-
Open the installation folder after extracting the file and click on the installation file.
-
Accept the terms and conditions and follow the instructions on the screen.
-
Select the destination folder where you want to install Catia P2 V5R20.
-
Wait for the installation process to complete.
-
Open the Mechdesign_Fix folder inside the extracted file and copy the JS0GROUP.dll file.
-
Paste the JS0GROUP.dll file into the installation folder where you installed Catia P2 V5R20.
-
Replace the original JS0GROUP.dll file with the copied one.
-
Run Catia P2 V5R20 from your desktop shortcut or start menu.
-
-
Benefits of Using Catia V5 R20 64 Bit Crack
-
By using Catia P2 V5R20 with a crack, you can enjoy some benefits that may not be available if you use the original software with a license. Here are some of them:
-
-
You can access all the features and functions of Catia P2 V5R20 without paying any license fee or activation code. This can save you a lot of money in the long run.
-
You can use Catia P2 V5R20 without any online connection or verification process. This can save you time and hassle when you want to use it anytime and anywhere.
-
You can explore different design, analysis, and production environments in Catia P2 V5R20 that are suitable for various industries and applications. You can also integrate them with other tools and processes for better results.
-
-
Risks of Using Catia V5 R20 64 Bit Crack
-
However, using Catia P2 V5R20 with a crack also comes with some risks that you should be aware of. Here are some of them:
-
-
You may face legal consequences for violating the intellectual property rights of Dassault Systèmes, the developer of Catia P2 V5R20. They may sue you for damages or take other actions against you if they find out that you are using their software illegally.
-
You may expose your computer to security threats from malware or viruses that may be hidden in the crack file or the source website. These malicious programs may damage your system, steal your data, or compromise your privacy.
-
You may experience performance issues or errors from using an unstable or outdated crack file that does not match the latest version of Catia P2 V5R20. You may also miss out on new features, updates, or bug fixes that are available for the original software.
-
-
Alternatives to Catia V5 R20 64 Bit Crack
-
If you are not comfortable with using Catia P2 V5R20 with a crack, or if you want to avoid the risks associated with it, you can consider some alternatives that may suit your needs better. Here are some of them:
-
-
You can use a free trial version of Catia P2 V5R20 that allows you to use the software for a limited time without paying or activating it. You can download the free trial version from this website. However, you may not be able to access all the features or save your work after the trial period expires.
-
You can use a student or academic version of Catia P2 V5R20 that is available for students and teachers who want to learn or teach the software. You can get the student or academic version from this website. However, you may need to provide proof of your eligibility and agree to some terms and conditions before using it.
-
You can use a similar but cheaper or free CAD/CAM/CAE tool that offers comparable functions and features to Catia P2 V5R20. Some examples are Fusion 360, FreeCAD, SolidWorks, and NX. However, you may need to learn a new interface or workflow and deal with compatibility issues when switching to a different tool.
-
-
Conclusion
-
In conclusion, Catia V5 R20 64 bit crack is a file that allows you to use Catia P2 V5R20 without paying or activating it. It can provide some benefits such as saving money and time and accessing a powerful and comprehensive CAD/CAM/CAE tool. However, it also has some risks such as legal consequences, security threats, and performance issues. Therefore, you should weigh the pros and cons carefully before deciding whether to use it or not. Alternatively, you can consider some other options such as using a free trial version, a student or academic version, or a similar but cheaper or free tool.
-
FAQs
-
-
What is Catia V5 R20? Catia V5 R20 is a comprehensive program for Computer Aided Design, Engineering, Analysis, Manufacturing and Production (CAD/CAM/CAE) that allows you to design, analyze, and produce products in various industries.
-
What is a crack? A crack is a file that modifies or bypasses the original software's security features, allowing you to use it without paying or activating it.
-
How to download and install Catia V5 R20 64 bit crack? To download and install Catia V5 R20 64 bit crack, you need to find and download the crack file from a reliable source, disable your antivirus and firewall, install Catia P2 V5R20 on your computer, and copy and paste the crack file into the installation folder.
-
What are the benefits of using Catia V5 R20 64 bit crack? The benefits of using Catia V5 R20 64 bit crack are that you can access all the features and functions of Catia P2 V5R20 without paying any license fee or activation code, use Catia P2 V5R20 without any online connection or verification process, and explore different design, analysis, and production environments in Catia P2 V5R20.
-
What are the risks of using Catia V5 R20 64 bit crack? The risks of using Catia V5 R20 64 bit crack are that you may face legal consequences for violating the intellectual property rights of Dassault Systèmes, expose your computer to security threats from malware or viruses that may be hidden in the crack file or the source website, and experience performance issues or errors from using an unstable or outdated crack file that does not match the latest version of Catia P2 V5R20.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Codigo De Activacion De Video Repair 16.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Codigo De Activacion De Video Repair 16.md
deleted file mode 100644
index 56ec7b6c26febd164a1629ba9f43d16f5bcdd706..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Codigo De Activacion De Video Repair 16.md
+++ /dev/null
@@ -1,139 +0,0 @@
-
-
Codigo De Activacion De Video Repair 16: How to Fix Your Corrupted Videos
-
Have you ever encountered a situation where your videos are corrupted and you can't play them on your computer or mobile device? Maybe you have recorded some precious moments with your family or friends, but the videos are damaged due to virus infection, power failure, improper operation, or other reasons. Or maybe you have downloaded some videos from the internet, but they are incomplete or broken. How frustrating is that?
Don't worry, there is a solution for you. In this article, we will introduce you to a powerful tool called Video Repair 16, which can help you fix your corrupted videos in a few simple steps. We will also show you how to get a codigo de activacion for Video Repair 16, which is required to activate the full version of the program. And we will share some tips and tricks for using Video Repair 16 effectively. So, let's get started!
-
Introduction
-
What is Video Repair 16?
-
Video Repair 16 is a professional video repair software that can repair various types of video corruption issues, such as video not playing, video freezing, video stuttering, video out of sync, video pixelated, video distorted, video black screen, and more. It supports repairing videos in various formats, such as MP4, MOV, AVI, MKV, FLV, WMV, etc. It also supports repairing videos from different sources, such as cameras, camcorders, drones, smartphones, memory cards, hard drives, etc.
-
Video Repair 16 has two repair modes: quick repair and advanced repair. The quick repair mode can fix most common video corruption issues by analyzing and repairing the video data. The advanced repair mode can fix more complex video corruption issues by using a sample video file as a reference. The sample video file should be from the same device and in the same format as the corrupted video file.
-
Why do you need a codigo de activacion for Video Repair 16?
-
Video Repair 16 is a paid software that offers a free trial version for users to test its features and performance. However, the free trial version has some limitations. For example, it can only repair up to three videos at a time, and it can only save up to one-third of each repaired video. To unlock the full functionality of Video Repair 16 and repair unlimited videos without any restrictions, you need to purchase a codigo de activacion for Video Repair 16.
-
-
A codigo de activacion for Video Repair 16 is a unique code that is generated after you buy a license for the software. It is used to verify your identity and activate your copy of Video Repair 16. Once you enter your codigo de activacion in the program, you can enjoy all the benefits of Video Repair 16.
-
How to get a codigo de activacion for Video Repair 16?
-
To get a codigo de activacion for Video Repair 16, you need to follow these steps:
-
-
Visit the official website of Video Repair 16 and click on the "Buy Now" button.
-
Select the license type that suits your needs. You can choose between a one-year license and a lifetime license.
-
Enter your personal information and payment details and complete the order.
-
Check your email inbox for a confirmation email from Video Repair 16. The email will contain your codigo de activacion and a download link for the software.
-
Download and install Video Repair 16 on your computer.
-
-
How to use Video Repair 16 to fix your corrupted videos
-
Step 1: Download and install Video Repair 16
-
If you have already downloaded and installed Video Repair 16 on your computer, you can skip this step. If not, you can follow these steps:
-
-
Click on the download link in the confirmation email from Video Repair 16 or visit the official website of Video Repair 16 and click on the "Download" button.
-
Save the setup file on your computer and run it.
-
Follow the instructions on the screen to complete the installation process.
-
-
Step 2: Launch Video Repair 16 and enter your codigo de activacion
-
If you have already entered your codigo de activacion in Video Repair 16, you can skip this step. If not, you can follow these steps:
-
-
Launch Video Repair 16 on your computer.
-
Click on the "Register" button at the top right corner of the main interface.
-
Enter your email address and codigo de activacion in the pop-up window and click on "Activate".
-
A message will appear confirming that your activation is successful.
-
-
Step 3: Add the corrupted videos to the program
-
To add the corrupted videos to Video Repair 16, you can follow these steps:
-
-
Click on the "Add" button at the bottom left corner of the main interface.
-
Browse your computer or external device and select the corrupted videos that you want to repair.
-
Click on "Open" to import them to the program.
-
You can also drag and drop the corrupted videos directly to the program.
-
-
Step 4: Choose the repair mode and start the repair process
-
To choose the repair mode and start the repair process in Video Repair 16, you can follow these steps:
-
-
Select one or more corrupted videos that you want to repair from the list.
-
Click on "Repair" at the bottom right corner of the main interface.
-
A pop-up window will appear asking you to choose between quick repair and advanced repair. You can select either option depending on your situation.
-
If you choose quick repair, click on "OK" to start repairing your videos immediately.
-
If you choose advanced repair, click on "OK" and then click on "Folder" icon next to each corrupted video to add a sample video file as a reference. Then click on "Repair" again to start repairing your videos.
-
-
Step 5: Preview and save the repaired videos
-
To preview and save the repaired videos in Video Repair 16, you can follow these steps:
-
-
After repairing your videos successfully with either quick repair or advanced repair mode, you will see them listed under "Repaired Files".
-
You can click on each repaired video file name or thumbnail image to preview it in a built-in media player window.
-
You can also check some information about each repaired video file, such as format, size, duration, and resolution, under "File Information".
-
If you are satisfied with the results, you can click on "Save All" at the bottom right corner of the main interface.
-
A pop-up window will appear asking you to choose a destination folder where you want to save your repaired videos. You can browse your computer or external device and select a folder. Then click on "Save".
-
Your repaired videos will be saved in the selected folder. You can access them anytime.
-
-
Tips and tricks for using Video Repair 16
-
Tip 1: Backup your videos before repairing them
Before repairing your videos, make a backup copy of them on another drive or folder. You can also use a cloud service such as Google Drive, Dropbox, or OneDrive to back up your videos online.
-
Tip 2: Use the advanced repair mode for severely corrupted videos
-
If your videos are severely corrupted and the quick repair mode cannot fix them, you can try the advanced repair mode. The advanced repair mode can repair more complex video corruption issues by using a sample video file as a reference. The sample video file should be from the same device and in the same format as the corrupted video file. For example, if your corrupted video file is an MP4 file recorded by your iPhone, you should use another MP4 file recorded by your iPhone as a sample video file. The sample video file should also be healthy and playable. The advanced repair mode will use the information from the sample video file to repair the corrupted video file.
-
Tip 3: Contact the customer support if you encounter any problems
-
If you encounter any problems while using Video Repair 16, such as activation issues, repairing errors, or saving failures, you can contact the customer support team of Video Repair 16 for help. You can send an email to support@videorepair16.com or visit the official website of Video Repair 16 and click on "Contact Us". You can also check the FAQ section on the website for some common questions and answers. The customer support team of Video Repair 16 is friendly and professional, and they will try their best to solve your problems as soon as possible.
-
Conclusion
-
In conclusion, Video Repair 16 is a powerful and easy-to-use video repair software that can help you fix your corrupted videos in a few simple steps. It supports repairing videos in various formats and from different sources. It also offers two repair modes: quick repair and advanced repair. To use Video Repair 16, you need to get a codigo de activacion for Video Repair 16 first, which you can buy from the official website of Video Repair 16. Then you can follow the steps we have shown you in this article to add, repair, preview, and save your corrupted videos. We hope this article has helped you understand how to use Video Repair 16 and how to get a codigo de activacion for Video Repair 16. If you have any questions or feedback, please feel free to leave a comment below or contact us via email. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions about Video Repair 16 and codigo de activacion for Video Repair 16 :
-
-
Q: How much does a codigo de activacion for Video Repair 16 cost?
-
A: A codigo de activacion for Video Repair 16 costs $49.95 for a one-year license and $69.95 for a lifetime license. You can pay with PayPal, credit card, debit card, or other payment methods.
-
Q: How long does it take to receive my codigo de activacion for Video Repair 16 after I place an order?
-
A: You will receive your codigo de activacion for Video Repair 16 instantly via email after you complete your payment. Please check your email inbox and spam folder for the confirmation email from Video Repair 16.
-
Q: Can I use my codigo de activacion for Video Repair 16 on multiple computers?
-
A: No, you can only use your codigo de activacion for Video Repair 16 on one computer. If you want to use it on another computer, you need to deactivate it from the first computer and activate it on the second computer.
-
Q: What if I lose my codigo de activacion for Video Repair 16 or forget to deactivate it from my old computer?
-
A: If you lose your codigo de activacion for Video Repair 16 or forget to deactivate it from your old computer, you can contact the customer support team of Video Repair 16 and provide them with your order number and email address. They will help you retrieve your codigo de activacion or reset your activation status.
-
Q: Does Video Repair 16 guarantee to fix all corrupted videos?
-
A: No, Video Repair 16 does not guarantee to fix all corrupted videos. Some videos may be too damaged or corrupted beyond repair. However, Video Repair 16 has a high success rate in repairing most common video corruption issues. You can try it for free before buying it to see if it works for your videos.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/3dmgame Dll Mediafire 88.md b/spaces/1gistliPinn/ChatGPT4/Examples/3dmgame Dll Mediafire 88.md
deleted file mode 100644
index c1b914d5f8d1daf52beaa585ed34b85d15de0ad7..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/3dmgame Dll Mediafire 88.md
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
How to Download and Fix 3dmGameDll.dll Errors for Free
-
If you are looking for a way to download and fix 3dmGameDll.dll errors for free, you have come to the right place. 3dmGameDll.dll is a dynamic link library file that is used by some popular games such as Mad Max, Metal Gear Solid V: The Phantom Pain, and Watch Dogs. This file contains important functions and data that the games need to run properly. However, sometimes this file can get corrupted, deleted, or misplaced, causing various problems such as crashes, freezes, or error messages.
In this article, we will show you how to download and fix 3dmGameDll.dll errors for free using simple methods. We will also explain what causes these errors and how to prevent them in the future. Follow the steps below to get started.
-
What Causes 3dmGameDll.dll Errors?
-
There are many possible causes of 3dmGameDll.dll errors, but some of the most common ones are:
-
-
The game installation is incomplete or damaged.
-
The game has been updated or patched incorrectly.
-
The game has been cracked or modified by third-party software.
-
The 3dmGameDll.dll file is missing or overwritten by another version.
-
The 3dmGameDll.dll file is infected by malware or viruses.
-
The Windows registry is corrupted or contains invalid entries.
-
The system drivers are outdated or incompatible.
-
-
These causes can lead to various symptoms such as:
-
-
The game fails to launch or crashes randomly.
-
The game runs slowly or lags.
-
The game displays error messages such as "The program can't start because 3dmGameDll.dll is missing from your computer. Try reinstalling the program to fix this problem." or "3dmGameDll.dll not found."
-
The game has graphical glitches or audio issues.
-
-
How to Download and Fix 3dmGameDll.dll Errors for Free?
-
There are several ways to download and fix 3dmGameDll.dll errors for free, depending on the cause and severity of the problem. Here are some of the most effective methods:
-
-
Method 1: Reinstall the Game
-
The easiest and most reliable way to fix 3dmGameDll.dll errors is to reinstall the game that is causing the problem. This will ensure that all the game files are intact and up-to-date, including the 3dmGameDll.dll file. To reinstall the game, follow these steps:
-
-
Uninstall the game from your computer using the Control Panel or the game's uninstaller.
-
Delete any leftover files and folders related to the game from your hard drive.
-
Download the latest version of the game from its official website or a trusted source.
-
Install the game on your computer following the instructions on the screen.
-
Launch the game and check if the error is resolved.
-
-
Method 2: Download and Replace the 3dmGameDll.dll File
-
If reinstalling the game does not work, you can try downloading and replacing the 3dmGameDll.dll file manually. This can help if the file is missing or corrupted on your system. To download and replace the 3dmGameDll.dll file, follow these steps:
-
-
Go to a reputable website that offers free .dll file downloads, such as DLLme.com.
-
Search for "3dmGameDll.dll" and select the version or variant that matches your game and system specifications.
-
Click on "Download" and save the file to your computer.
-
Locate the folder where your game is installed on your hard drive (usually C:\Program Files (x86) or C:\Program Files).
-
Find and rename the existing 3dmGameDll.dll file (if any) to something else, such as "3dmGameDll_old.dll".
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/A Pdf Content Splitter 4.8.4 [HOT] Keygen For 14.md b/spaces/1gistliPinn/ChatGPT4/Examples/A Pdf Content Splitter 4.8.4 [HOT] Keygen For 14.md
deleted file mode 100644
index 9d7b0e6e5c925a4374e88d7b638d22ef723b1cbc..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/A Pdf Content Splitter 4.8.4 [HOT] Keygen For 14.md
+++ /dev/null
@@ -1,39 +0,0 @@
-
-
A-PDF Content Splitter 4.8.4: A Powerful Tool to Split PDF Files by Content
-
If you have ever dealt with large PDF files that contain multiple documents or sections, you know how hard it can be to manage them. You may want to extract some pages, rearrange them, or save them as separate files for easier sharing or printing. But how can you do that without spending hours on manual work or buying expensive software?
-
Fortunately, there is a solution: A-PDF Content Splitter 4.8.4. This is a user-friendly and affordable PDF tool that allows you to split PDF files into smaller documents based on specific content on their pages. You can set up rules to define how to split your PDFs by unique text, find text, or word position. You can also customize the output names and properties of the split files, and even set up hot directories to automate the splitting process.
With A-PDF Content Splitter 4.8.4, you can easily manage your PDF content and save time and money. Whether you need to split invoices, reports, contracts, manuals, or any other PDF documents, A-PDF Content Splitter 4.8.4 can handle it with ease and accuracy.
-
Here are some of the features and benefits of A-PDF Content Splitter 4.8.4:
-
-
It supports three different split methods: by unique text, by find text, and by word position.
-
It allows you to edit and reuse the split rules for different files.
-
It offers flexible tags to trigger the split and customize the output names.
-
It can work with command line and trigger another program to process the split files.
-
It is compatible with Windows XP/Vista/7/8/10.
-
It is affordable and offers a free trial version.
-
-
If you want to learn more about A-PDF Content Splitter 4.8.4, you can visit their website or download it from Softpedia. You can also check out some other online PDF tools such as Adobe Acrobat that can help you split PDF files by pages.
-
-
How to use A-PDF Content Splitter 4.8.4
-
Using A-PDF Content Splitter 4.8.4 is very easy and intuitive. You just need to follow these simple steps:
-
-
Select the PDF files that you want to split.
-
Select a split rule to apply. You can choose from the predefined rules or create your own.
-
Click the "Split all" button and wait for the program to finish.
-
Check the output folder and enjoy your split PDF files.
-
-
You can also use the hot directory feature to automatically split any PDF files that are placed in a specific folder. You just need to set up the hot directory, the split rule, and the output folder, and A-PDF Content Splitter 4.8.4 will do the rest for you.
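For readers comfortable with scripting, the "find text" split rule described above can be sketched in a few lines of Python. This is an illustrative sketch, not A-PDF's actual API: it assumes the page texts have already been extracted with some PDF library, and it only computes which pages belong to which output file:

```python
# Sketch of an A-PDF-style "find text" split rule in pure Python.
# `page_texts` is a list of extracted page texts (one string per page);
# a page containing `marker` starts a new output document.
def split_pages_by_text(page_texts, marker):
    groups = []
    for i, text in enumerate(page_texts):
        if marker in text or not groups:
            groups.append([])  # begin a new document
        groups[-1].append(i)   # page i belongs to the current document
    return groups

# Example: invoices where each invoice's first page contains "Invoice #"
pages = ["Invoice #1 ...", "page 2 of invoice 1", "Invoice #2 ...", "details"]
print(split_pages_by_text(pages, "Invoice #"))  # → [[0, 1], [2, 3]]
```

An actual splitter would then copy each group of pages into its own PDF file.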
-
-
Why choose A-PDF Content Splitter 4.8.4
-
A-PDF Content Splitter 4.8.4 is a powerful and reliable PDF tool that can help you split your PDF files by content in a fast and accurate way. Here are some of the reasons why you should choose A-PDF Content Splitter 4.8.4 over other PDF splitters:
-
-
-
It can split PDF files by content, not just by pages. This means that you can extract specific sections or documents from your PDF files based on the text they contain.
-
It can handle large and complex PDF files with ease. It can split PDF files that have hundreds or thousands of pages without any problems.
-
It can save you time and money. You don't need to spend hours on manual work or buy expensive software to split your PDF files by content. A-PDF Content Splitter 4.8.4 can do it for you in minutes and at a fraction of the cost.
-
It offers a free trial version. You can download and try A-PDF Content Splitter 4.8.4 for free before you decide to buy it. You can test its features and performance and see if it meets your needs.
-
-
A-PDF Content Splitter 4.8.4 is a must-have tool for anyone who works with PDF files on a regular basis. It can help you manage your PDF content more efficiently and effectively.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/60 Lakh - The New Punjabi Hit by Bukka Jatt and R Nait.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/60 Lakh - The New Punjabi Hit by Bukka Jatt and R Nait.md
deleted file mode 100644
index 3000cec700e605c7cf1b56e0542550ea83b19a7f..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/60 Lakh - The New Punjabi Hit by Bukka Jatt and R Nait.md
+++ /dev/null
@@ -1,109 +0,0 @@
-
-
How to Download 60 Lakh Song from DJ Punjab
-
Punjabi music is one of the most popular genres of music in India and across the world. It has a rich and diverse history, culture, and style that appeals to people of all ages and backgrounds. One of the latest hits in Punjabi music is 60 Lakh, a song by Bukka Jatt featuring R Nait. The song was released in September 2021 and has gained over 4.9 million views on YouTube as of October 2021. The song is a catchy and upbeat track that celebrates the success and lifestyle of the singers.
If you are a fan of Punjabi music and want to download 60 Lakh song to your device, you might be wondering where to find it. One of the best websites for Punjabi songs download is DJ Punjab. DJ Punjab is a popular online platform that offers a huge collection of Punjabi songs, albums, videos, and more. You can find songs from various genres, artists, and eras on DJ Punjab. You can also download songs in different formats and qualities according to your preference.
-
Downloading songs from DJ Punjab has many benefits. You can enjoy your favorite Punjabi songs offline without any internet connection. You can also save your data and storage space by choosing the optimal file size and quality. You can also create your own playlists and share them with your friends and family. In this article, we will show you how to download 60 Lakh song from DJ Punjab in simple steps. We will also give you some tips and tricks for downloading songs from DJ Punjab safely and efficiently.
-
Steps to Download 60 Lakh Song from DJ Punjab
-
Downloading 60 Lakh song from DJ Punjab is very easy and fast. You just need to follow these four steps:
-
Step 1: Visit the official website of DJ Punjab
-
The first step is to visit the official website of DJ Punjab. You can use any web browser on your device to access it. The website address is djpunjab.com. You will see a homepage with various categories, menus, and options.
-
Step 2: Search for 60 Lakh song in the search box
-
The next step is to search for 60 Lakh song in the search box. You can find the search box at the top right corner of the homepage. Type in "60 Lakh", "Bukka Jatt", or "R Nait" in the search box and hit enter. You will see a list of results related to your query.
Step 3: Select the desired quality and format of the song
-
The third step is to select the desired quality and format of the song. You can find different options for downloading the song on the result page. You can choose from MP3, MP4, HD, or 320 kbps formats. You can also see the file size and duration of the song before downloading. Choose the option that suits your device and preference.
-
Step 4: Click on the download button and save the song to your device
-
The final step is to click on the download button and save the song to your device. You can find the download button below the selected option. Click on it and wait for a few seconds. The song will start downloading automatically. You can check the progress of the download in your browser or in your device's download folder. Once the download is complete, you can enjoy listening to 60 Lakh song offline.
-
Tips and Tricks for Downloading Songs from DJ Punjab
-
Downloading songs from DJ Punjab is easy and convenient, but there are some tips and tricks that you can follow to make it even better. Here are some of them:
-
Use a VPN or proxy to access DJ Punjab if it is blocked in your region
-
DJ Punjab is a free website that offers Punjabi songs download, but it may not be accessible in some regions due to legal or technical issues. If you face any problem in accessing DJ Punjab, you can use a VPN or proxy service to bypass the restrictions. A VPN or proxy service will change your IP address and location, and allow you to access DJ Punjab from anywhere in the world.
-
-
Check the file size and duration of the song before downloading to avoid fake or incomplete downloads
-
DJ Punjab is a reliable website that offers high-quality Punjabi songs download, but sometimes you may encounter fake or incomplete downloads. These are files that have a smaller size or shorter duration than the original song, and may contain malware or viruses. To avoid these, you should always check the file size and duration of the song before downloading. You can compare them with the information given on YouTube or other sources. If you find any discrepancy, you should avoid downloading that file and look for another option.
-
Use a reliable antivirus software to scan the downloaded files for any malware or viruses
-
DJ Punjab is a safe website that offers virus-free Punjabi songs download, but sometimes you may still get infected by malware or viruses from other sources. These are malicious programs that can harm your device or steal your data. To prevent these, you should always use a reliable antivirus software to scan the downloaded files for any malware or viruses. You should also update your antivirus software regularly to keep it up to date with the latest threats.
-
Conclusion
-
Punjabi music is a great way to enjoy yourself and express your emotions. 60 Lakh is one of the latest and most popular songs in Punjabi music that you can download from DJ Punjab. DJ Punjab is a wonderful website that offers a huge collection of Punjabi songs, albums, videos, and more. You can download songs from DJ Punjab in simple steps and in different formats and qualities.
-
However, if you are looking for some alternatives to DJ Punjab for Punjabi songs download, you can try these websites as well:
-
-
Mr Jatt: Mr Jatt is another popular website for Punjabi song downloads, offering a wide range of songs from various genres, artists, and eras.
-
Pendu Jatt: Pendu Jatt specializes in Punjabi song downloads, offering exclusive and the latest songs from top Punjabi singers and musicians.
-
Dj Youngster: Dj Youngster focuses on Punjabi song downloads, offering high-quality and fast downloads of new and old Punjabi songs.
-
-
We hope this article has helped you learn how to download 60 Lakh song from DJ Punjab. If you have any questions or feedback, please let us know in the comments section below. Thank you for reading!
-
FAQs
-
Is DJ Punjab legal and safe to use?
-
DJ Punjab is a legal and safe website to use for Punjabi songs download, as long as you use it for personal and non-commercial purposes only. However, some of the songs on DJ Punjab may be copyrighted by their respective owners, so you should always respect their rights and follow their terms and conditions.
-
How can I download Punjabi songs from YouTube?
-
You can download Punjabi songs from YouTube by using a third-party website or software that can convert YouTube videos to MP3 or MP4 files. Some of the websites that you can use are ytmp3.cc, y2mate.com, and flvto.biz. However, you should be careful when using these websites, as they may contain ads, pop-ups, or malware. You should also respect the rights of the YouTube creators and follow their terms and conditions.
-
What are some of the best Punjabi songs of 2021?
-
Some of the best Punjabi songs of 2021 are:
-
| Song | Singer | Views on YouTube (as of October 2021) |
| --- | --- | --- |
| 60 Lakh | Bukka Jatt ft. R Nait | 4.9 million |
| Brown Munde | AP Dhillon ft. Gurinder Gill and Shinda Kahlon | 163 million |
| Bachpan Ka Pyaar | Sahdev Dirdo ft. Badshah and Aastha Gill | 64 million |
| Pani Di Gal | Maninder Buttar ft. Asees Kaur and Jasmin Bhasin | 197 million |
| Baarish Ki Jaaye | B Praak ft. Nawazuddin Siddiqui and Sunanda Sharma | 387 million |
-
How can I listen to Punjabi songs online for free?
-
You can listen to Punjabi songs online for free by using various streaming platforms and apps that offer Punjabi music. Some of the platforms and apps that you can use are Gaana, JioSaavn, Spotify, Wynk Music, and Hungama Music. You can browse through different categories, playlists, and recommendations on these platforms and apps. You can also create your own account and customize your preferences.
-
What are some of the features of Punjabi music industry?
-
Punjabi music industry is one of the most vibrant and dynamic music industries in India and the world. It has some distinctive features, such as:
-
-
Punjabi music is influenced by various cultures, regions, and languages, such as Sikhism, Sufism, folk, rap, pop, rock, and more.
-
Punjabi music is known for its lively and energetic beats, catchy lyrics, and expressive vocals.
-
Punjabi music is constantly evolving and experimenting with new styles, trends, and collaborations.
-
Punjabi music has a huge fan base and market, both in India and abroad.
-
Punjabi music has produced many talented and famous singers, musicians, composers, and producers.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cmo descargar Video Poker Jackpot APK y ganar grandes premios.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cmo descargar Video Poker Jackpot APK y ganar grandes premios.md
deleted file mode 100644
index 9d85bceaa86519c9e088b8d41d910b48cf63d15b..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cmo descargar Video Poker Jackpot APK y ganar grandes premios.md
+++ /dev/null
@@ -1,181 +0,0 @@
-
-
Descargar Video Poker Jackpot APK: How to Play and Win Big
-
If you love playing video poker, you will love Video Poker Jackpot, a fun and addictive game for Android devices. In this article, we will show you how to download and install Video Poker Jackpot APK, how to play and win big at this game, and some stories and testimonials of video poker jackpot winners. Let's get started!
Video Poker Jackpot is a free video poker game that you can play on your Android phone or tablet. It is one of the most popular video poker games on Google Play, with over 1 million downloads and a 4.5-star rating. You can enjoy playing various video poker variants, such as Jacks or Better, Deuces Wild, Double Bonus Poker, and more. You can also compete with other players in tournaments and leaderboards, and win huge jackpots.
-
Features and benefits of the game
-
Some of the features and benefits of playing Video Poker Jackpot are:
-
-
You can play offline or online, anytime and anywhere.
-
You can choose from different bet sizes, from 1 to 500 coins.
-
You can get free coins every hour, every day, and every week.
-
You can claim daily bonuses and rewards for playing.
-
You can customize your game settings, such as sound, speed, auto-hold, etc.
-
You can enjoy realistic graphics, animations, and sounds.
-
You can learn how to play video poker with the help section.
-
-
How to download and install Video Poker Jackpot APK
-
Steps to download the APK file from a trusted source
-
If you want to download Video Poker Jackpot APK, you need to follow these steps:
-
-
Go to a trusted website that offers the APK file of Video Poker Jackpot, such as Uptodown.
-
Click on the green button that says "Download" or "Descargar".
-
Wait for the download to finish. You may need to allow downloads from unknown sources in your device settings.
-
Locate the downloaded APK file in your device storage.
-
-
Steps to install the APK file on your device
-
After you have downloaded the APK file of Video Poker Jackpot, you need to install it on your device. Here are the steps:
-
-
-
Tap on the APK file that you have downloaded.
-
A pop-up window will appear asking you to confirm the installation. Tap on "Install" or "Instalar".
-
Wait for the installation to complete. You may need to grant some permissions to the app.
-
Once the installation is done, you can open the app and start playing Video Poker Jackpot.
-
-
How to play Video Poker Jackpot
-
The rules and objective of video poker
-
Video poker is a casino game that is based on five-card draw poker. The objective of the game is to make the best possible poker hand out of the five cards that you are dealt. You can choose to keep or discard any of the cards, and replace them with new ones from the same deck. The payout of the game depends on the strength of your final hand and the paytable of the game variant that you are playing.
-
The different variants and paytables of video poker
-
Video Poker Jackpot offers you several video poker variants to choose from, each with its own rules and paytable. Some of the variants are:
-
-
| Variant | Rules | Paytable (for 1 coin bet) |
| --- | --- | --- |
| Jacks or Better | The most basic and common variant of video poker. You need at least a pair of jacks or better to win. | Royal Flush 250, Straight Flush 50, Four of a Kind 25, Full House 9, Flush 6, Straight 4, Three of a Kind 3, Two Pair 2, Jacks or Better 1 |
| Deuces Wild | All the twos in the deck are wild cards, meaning they can substitute for any other card to make a winning hand. You need at least a three of a kind to win. | Natural Royal Flush 250, Four Deuces 200, Wild Royal Flush 25, Five of a Kind 15, Straight Flush 9, Four of a Kind 4, Full House 4, Flush 3, Straight 2, Three of a Kind 1 |
| Double Bonus Poker | A variant of Jacks or Better that pays extra for four aces; four twos, threes, or fours; and four fives through kings. You need at least a pair of jacks or better to win. | Royal Flush 250, Straight Flush 50, Four Aces 160, Four Twos, Threes, or Fours 80, Four Fives through Kings 50, Full House 10, Flush 7, Straight 5, Three of a Kind 3, Two Pair 1, Jacks or Better 1 |
-
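To make the Jacks or Better paytable concrete, it can be expressed as a simple lookup in Python. This is only a sketch based on the numbers in the table above; the 4,000-coin figure is the standard max-coin royal flush bonus this article mentions:

```python
# Jacks or Better payouts per coin, taken from the paytable above.
JACKS_OR_BETTER = {
    "Royal Flush": 250, "Straight Flush": 50, "Four of a Kind": 25,
    "Full House": 9, "Flush": 6, "Straight": 4,
    "Three of a Kind": 3, "Two Pair": 2, "Jacks or Better": 1,
}

def payout(hand_rank, coins):
    # A five-coin royal flush pays the 4,000-coin bonus instead of 5 x 250.
    if hand_rank == "Royal Flush" and coins == 5:
        return 4000
    return JACKS_OR_BETTER.get(hand_rank, 0) * coins

print(payout("Full House", 5))   # → 45
print(payout("Royal Flush", 5))  # → 4000
```

The jump from 1,250 to 4,000 coins on the royal flush is exactly why betting the maximum is recommended below.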
The tips and strategies to improve your chances of winning
-
To play Video Poker Jackpot effectively, you need to follow some tips and strategies, such as:
-
-
Always bet the maximum number of coins, as this will give you the highest payout for a royal flush.
-
Always check the paytable of the game variant that you are playing, and choose the one that offers the best return.
-
Always use the auto-hold feature, as this will help you make the optimal decisions based on the mathematically correct strategy.
-
Always keep any winning hand, unless you have a chance to improve it to a higher-paying hand.
-
Always keep any four cards to a royal flush, straight flush, or flush.
-
Always keep any three cards to a royal flush or straight flush.
-
Always keep any pair, unless you have four cards to a straight or flush.
-
Always keep any high card (jack or better), unless you have three cards to a straight or flush.
-
Always discard any unpaired low cards (ten or lower), unless you have four cards to a straight or flush.
-
-
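A much-simplified version of these hold rules can be written in Python. The sketch below covers only two of the tips (keep any pair, otherwise keep jacks or better) and deliberately ignores straight and flush draws, so it is illustrative rather than the optimal strategy:

```python
from collections import Counter

def simple_holds(hand):
    """A deliberately simplified hold rule: keep any paired cards;
    otherwise keep high cards (jack = 11 or better).
    hand: list of (rank, suit) tuples, ranks 2-14 (J=11, Q=12, K=13, A=14).
    Returns the indices of the cards to hold."""
    counts = Counter(rank for rank, _ in hand)
    paired = [i for i, (rank, _) in enumerate(hand) if counts[rank] >= 2]
    if paired:
        return paired
    return [i for i, (rank, _) in enumerate(hand) if rank >= 11]

# Pair of jacks: hold both jacks, discard the rest.
print(simple_holds([(11, 's'), (11, 'h'), (2, 'd'), (5, 'c'), (9, 's')]))  # → [0, 1]
```

The auto-hold feature in the game applies the full mathematically correct strategy, which also weighs partial straights and flushes.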
How to win big at Video Poker Jackpot
-
The best hands and payouts in video poker
-
The best hands in video poker are the ones that pay the most, depending on the game variant and the number of coins that you bet. Here are some examples:
- A royal flush is the highest-paying hand in video poker. It consists of an ace, king, queen, jack, and ten of the same suit. It pays 250 coins for a one-coin bet, but it pays a whopping 4,000 coins for a five-coin bet. That's why it's important to always bet the maximum number of coins.
- A straight flush is the second-highest-paying hand in video poker. It consists of five consecutive cards of the same suit. It pays 50 coins for a one-coin bet, and 250 coins for a five-coin bet.
- A four of a kind is the third-highest-paying hand in video poker. It consists of four cards of the same rank. It pays 25 coins for a one-coin bet, and 125 coins for a five-coin bet. However, some game variants pay more for certain four of a kinds, such as four aces or four deuces.
- A full house is the fourth-highest-paying hand in video poker. It consists of three cards of the same rank and two cards of another rank. It pays 9 coins for a one-coin bet, and 45 coins for a five-coin bet.
- A flush is the fifth-highest-paying hand in video poker. It consists of five cards of the same suit. It pays 6 coins for a one-coin bet, and 30 coins for a five-coin bet.
The jackpot feature and how to trigger it
-
One of the most exciting features of Video Poker Jackpot is the jackpot feature, which gives you a chance to win a huge amount of coins. The jackpot feature is triggered randomly after any winning hand. When it happens, you will see a wheel with different segments, each with a multiplier value. You can spin the wheel once, and whatever multiplier you land on will be applied to your current win. For example, if you win 100 coins and spin the wheel and get a 10x multiplier, you will win 1,000 coins.
-
The jackpot feature also has a special segment that says "Jackpot". If you are lucky enough to land on this segment, you will win the progressive jackpot, which is the highest prize in the game. The progressive jackpot is a pool of coins that increases every time someone plays Video Poker Jackpot. You can see the current amount of the jackpot on the top of the screen.
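The wheel can be simulated as a toy model in Python. The segment values below are invented for illustration only; the game's real segments and odds are not published here:

```python
import random

# Toy model of the bonus wheel. The multipliers and the single "Jackpot"
# segment are assumptions made for this sketch, not the game's real odds.
WHEEL = [2, 2, 3, 3, 5, 10, "Jackpot"]

def spin_wheel(current_win, progressive_jackpot):
    segment = random.choice(WHEEL)
    if segment == "Jackpot":
        return progressive_jackpot   # the special segment pays the jackpot
    return current_win * segment     # otherwise multiply the current win
```

For example, a 100-coin win can become 200 to 1,000 coins in this model, or the full progressive jackpot on the special segment.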
-
The stories and testimonials of video poker jackpot winners
-
Many players have won big at Video Poker Jackpot, and some of them have shared their stories and testimonials on the game's review section on Google Play. Here are some examples:
-
-
"I love this game! I won the jackpot twice in one day! I couldn't believe it! Thank you so much for this awesome game!" - Maria
-
-
-
"This is the best video poker game ever! I play it every day and I always have fun. I hit the jackpot last week and I was so happy! I recommend this game to everyone who loves video poker!" - John
-
-
-
"Wow! This game is amazing! I just won the jackpot and I'm speechless! This game is very generous and rewarding. I'm so glad I found it!" - Lisa
-
-
Conclusion
-
Video Poker Jackpot is a great game for video poker lovers who want to play on their Android devices. You can download and install Video Poker Jackpot APK from a trusted source, and enjoy playing various video poker variants with realistic graphics and sounds. You can also win big at this game by following some tips and strategies, and by triggering the jackpot feature. If you are lucky, you might join the club of video poker jackpot winners who have shared their stories and testimonials on Google Play.
-
So what are you waiting for? Download Video Poker Jackpot APK today and start playing and winning big!
-
FAQs
-
What are the advantages of playing video poker online?
-
Some of the advantages of playing video poker online are:
-
-
You can play anytime and anywhere, as long as you have an internet connection.
-
You can choose from a wide range of video poker variants, each with its own rules and paytable.
-
You can play at your own pace, without any pressure from other players or dealers.
-
You can practice your skills and strategies without risking any real money.
-
You can enjoy various bonuses and rewards from online casinos or apps.
-
-
Is Video Poker Jackpot safe and secure?
-
Yes, Video Poker Jackpot is safe and secure, as long as you download and install it from a trusted source, such as Uptodown. The app does not contain any malware or viruses that could harm your device or data. The app also uses encryption technology to protect your transactions and personal information.
-
How can I get free coins and bonuses in Video Poker Jackpot?
-
There are several ways to get free coins and bonuses in Video Poker Jackpot, such as:
-
-
You can get free coins every hour, every day, and every week by tapping on the coin icon on the top of the screen.
-
You can claim daily bonuses and rewards by tapping on the gift icon on the top of the screen.
-
You can watch video ads to earn extra coins by tapping on the video icon on the bottom of the screen.
-
You can invite your friends to play Video Poker Jackpot and get referral bonuses by tapping on the invite icon on the bottom of the screen.
-
You can participate in tournaments and leaderboards and win coins and prizes by tapping on the trophy icon on the bottom of the screen.
-
-
What are the system requirements for Video Poker Jackpot?
-
The system requirements for Video Poker Jackpot are:
-
-
An Android device with version 4.1 or higher.
-
At least 100 MB of free storage space.
-
An internet connection (optional, but recommended).
-
-
How can I contact the support team of Video Poker Jackpot?
-
If you have any questions, feedback, or issues with Video Poker Jackpot, you can contact the support team by:
-
-
Tapping on the menu icon on the top left of the screen and selecting "Contact Us".
-
Sending an email to videopokerjackpot@gmail.com.
-
Visiting their Facebook page, Video Poker Jackpot.
-
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/experimental/rl/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/experimental/rl/__init__.py
deleted file mode 100644
index 26e6f1557fca4e89f7fde75655a15055095b9af1..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/experimental/rl/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# flake8: noqa
-
-from .value_guided_sampling import ValueGuidedRLPipeline
diff --git a/spaces/A00001/bingothoo/src/state/index.ts b/spaces/A00001/bingothoo/src/state/index.ts
deleted file mode 100644
index 272106d619c69f124ea8dd0f10872ab4840152d7..0000000000000000000000000000000000000000
--- a/spaces/A00001/bingothoo/src/state/index.ts
+++ /dev/null
@@ -1,118 +0,0 @@
-import { BingWebBot } from '@/lib/bots/bing'
-import { BingConversationStyle, ChatMessageModel, BotId } from '@/lib/bots/bing/types'
-import { nanoid } from '@/lib/utils'
-import { atom } from 'jotai'
-import { atomWithImmer } from 'jotai-immer'
-import { atomWithStorage } from 'jotai/utils'
-import { atomFamily } from 'jotai/utils'
-import { atomWithHash, atomWithLocation } from 'jotai-location'
-
-const initialMessages: ChatMessageModel[] = [
- { author: 'system', text: 'conversation between user and robot', id: '1' },
- { author: 'user', text: '销量最高的 3 种宠物吸尘器有哪些优点和缺点? ', id: '2' },
- {
- author: 'bot', text: `
-您好,这是必应。根据网上的搜索结果,我为您找到了以下三款销量最高的宠物吸尘器,以及它们的优点和缺点:
-
-- **戴森Dyson V10轻量版 Digital Slim Fluffy无线吸尘器**[^1^] [^3^]:这款吸尘器专为宠物家庭设计,拥有强大吸力和多功能吸头,能够有效清除宠物毛发和灰尘。
- - 优点:轻便易用,续航时间长,过滤系统先进,能够有效去除过敏原;
- - 缺点:价格较高,噪音较大,需要定期清洁滤网。
-- **飞利浦FC6908/82(W7)无线吸尘器**[^2^] [^4^]:这是一款手持式干吸无线吸尘器,专门为宠物家庭设计。它具有75分钟的超高续航,适合大户型;除螨涡轮刷,可以深度清洁床单被罩,免受螨虫侵害;吸拖一体设计,280ml大水箱,还可以往水箱里加入清洁剂。
- - 优点:清洁性能强劲,操作方便,适用多种场景;
- - 缺点:噪音较大,达到84dB(A);尘杯容量较小,只有0.6L;充电时长较长,需要5小时。
-- **必胜(Bissell) 宠物吸尘器**[^2^]:这是一款便携式手持真空吸尘器,专为清理宠物毛发而设计。它配备特殊的橡胶喷嘴和软毛刷头,可以轻松拾取宠物毛发和灰尘;几乎静音运行。
- - 优点:轻巧易用,重量只有1.36kg;电池可充电可更换;
- - 缺点:续航时间不稳定,在15-30分钟之间浮动;吸力不够强劲,在地毯或沙发上可能效果不佳;没有过滤系统或过滤网。
-
-希望这些信息对您有所帮助。如果您还有其他问题,请随时提问。😊
- `, id: '3' },
- { author: 'user', text: '今天的新闻', id: '4' },
- {
- author: 'bot', text: `你好,这是Bing。我根据你的搜索查询“今天的新闻”为你找到了以下信息:
-
- # 中国新闻
- - 《求是》杂志发表习近平总书记重要文章《深化党和国家机构改革 推进国家治理体系和治理能力现代化》[^1^]
- - 梦之队第5金! 全红婵陈芋汐女双10米台夺冠[^1^]
- - 中央气象台7月16日18时发布台风橙色预警[^1^]
- - 贵州石阡:暑期旅游带动乡村振兴[^1^]
- - 激活大科学装置的“人才红利”[^1^]
- - 聚才用才留才 让希望的田野成为智慧的田野[^1^]
- - 科学家发现粮食系统增效可有效支撑我国碳中和[^1^]
- - 成都以赛为媒提升城市美誉度[^1^]
- - 香江观澜:“抢人才”成效显著 香港竞争力依旧[^1^]
- - 浙江建德:新安江上享清凉[^1^]
- - 定向资金持续精准滴灌 高质量发展力量不断积累[^1^]
- - 企业信心恢复加大中长期投资 为稳增长奠定基础[^1^]
- - 武汉渡江节开赛 千名“泳士”竞渡长江[^1^]
- - 金融更加重视对实体经济支持 降低融资成本提振市场信心[^1^]
- - 大运来了丨成都迎大运 全民健身动起来[^1^]
- - 藏在高校里的宝藏博物馆[^1^]
- - 中国汽车工业用70年赢得三个“全球第一”[^1^]
- - 辽宁阜新市委常委、副市长赖化平涉嫌严重违纪违法接受审查调查[^1^]
-
- # 国际新闻
- - 联合国秘书长:可持续发展面临风险 各国需加强行动[^2^]
- - 国际航运业加快绿色转型[^2^]
- - 美企反对收紧对华芯片出口限制[^2^]
- - 欧洲加大气候科技领域投资[^2^]
- - 中企助力丹麦发展清洁能源[^2^]
- - 中国代表呼吁国际社会共同努力防止乌克兰局势失控[^2^]
- - 中国和阿尔及利亚共同构建新型国际关系典范[^2^]
-
- 以上信息仅供参考,具体详情请点击以下链接查看:
-
- [1] [中国新闻_央视网(cctv.com)](https://news.cctv.com/china/)
- [2] [人民网_网上的人民日报](http://www.people.com.cn/)`, id: '5' },
- { author: 'user', text: '写一个快排', id: '6' },
- { author: 'bot', text: "你好,这是Bing。我可以帮你用python写一个快排。快速排序是一种基于分治思想的排序算法,它的基本思想是:从待排序的数组中选择一个基准元素,将数组分为两个子数组,使得左边的子数组中的元素都小于或等于基准元素,右边的子数组中的元素都大于或等于基准元素,然后对两个子数组递归地进行快速排序,最后合并结果。下面是用python实现的快速排序的代码:\n\n```python\n# 定义一个函数,用于交换数组中的两个元素\ndef swap(arr, i, j):\n temp = arr[i]\n arr[i] = arr[j]\n arr[j] = temp\n\n# 定义一个函数,用于对数组的一段区间进行划分\ndef partition(arr, low, high):\n # 选择最右边的元素作为基准元素\n pivot = arr[high]\n # 初始化一个指针,指向左边区间的最后一个位置\n i = low - 1\n # 遍历左边区间中的每个元素\n for j in range(low, high):\n # 如果当前元素小于或等于基准元素\n if arr[j] <= pivot:\n # 将指针右移一位\n i += 1\n # 将指针所指的元素与当前元素交换\n swap(arr, i, j)\n # 将基准元素与指针右边的第一个元素交换\n swap(arr, i + 1, high)\n # 返回基准元素的最终位置\n return i + 1\n\n# 定义一个函数,用于对数组进行快速排序\ndef quick_sort(arr, low, high):\n # 如果区间长度大于1\n if low < high:\n # 对区间进行划分,得到基准元素的位置\n p = partition(arr, low, high)\n # 对左边子区间进行快速排序\n quick_sort(arr, low, p - 1)\n # 对右边子区间进行快速排序\n quick_sort(arr, p + 1, high)\n\n# 测试代码\narr = [5, 3, 7, 2, 9, 1, 4, 6, 8]\nprint(\"原始数组:\", arr)\nquick_sort(arr, 0, len(arr) - 1)\nprint(\"排序后的数组:\", arr)\n```", id: '7' },
- {
- author: 'bot', text: "好的,我会尝试画一只猫。\n > ",
- id: '8'
- }
-]
-
-export const GreetMessages = [
- '谢谢你! 知道你什么时候准备好继续前进总是很有帮助的。我现在能为你回答什么问题?',
- '重新开始总是很棒。问我任何问题!',
- '当然,我很乐意重新开始。我现在可以为你提供哪些帮助?',
- '当然,我已准备好进行新的挑战。我现在可以为你做什么?',
- '很好,让我们来更改主题。你在想什么?',
- '不用担心,我很高兴尝试一些新内容。我现在可以为你回答什么问题?',
- '好的,我准备好了!感谢重置。我们应该了解哪些内容?',
- '感谢刷新!你有新的话题吗?',
- '明白了,让我们重新开始。接下来应该讨论什么?',
- '下一步!我可以为你做什么?',
- '好的,我已准备好新话题。我们应该一起了解哪些内容?'
-]
-
-export const bingConversationStyleAtom = atomWithStorage('bingConversationStyle', BingConversationStyle.Creative, undefined, { unstable_getOnInit: true })
-export const voiceAtom = atomWithStorage('enableTTS', false, undefined, { unstable_getOnInit: true })
-
-type Param = { botId: BotId; page: string }
-
-const createBotInstance = () => {
- return new BingWebBot({
- cookie: ' ',
- ua: ' ',
- })
-}
-
-export const chatFamily = atomFamily(
- (param: Param) => {
- return atomWithImmer({
- botId: param.botId,
- bot: createBotInstance(),
- messages: [] as ChatMessageModel[],
- generatingMessageId: '',
- abortController: undefined as AbortController | undefined,
- conversationId: nanoid(),
- })
- },
- (a, b) => a.botId === b.botId && a.page === b.page,
-)
-
-export const hashAtom = atomWithHash('dialog', '')
-
-export const locationAtom = atomWithLocation()
-
-export const voiceListenAtom = atom(false)
diff --git a/spaces/AB-TW/team-ai/app.py b/spaces/AB-TW/team-ai/app.py
deleted file mode 100644
index f054f1b45cab7549a8e92ed1fa78b08cdae155ac..0000000000000000000000000000000000000000
--- a/spaces/AB-TW/team-ai/app.py
+++ /dev/null
@@ -1,190 +0,0 @@
-import gradio as gr
-from langchain.document_loaders import TextLoader
-from agents.tools.python_code_tool import generate_and_excute_python_code
-from agents.tools.shell_tool import generate_and_excute_shell_code
-from chains import HumanFeedBackChain, contextRewriteChain
-from embedding import CustomEmbedding
-from memories import HumenFeedbackBufferMemory
-from agents.code_generate_agent import code_agent_executor, code_agent_tools
-from agents.code_execute_agent import generate_and_excute_code_agent
-
-
-baMemory = HumenFeedbackBufferMemory(
- input_key="input", human_prefix="Answer", ai_prefix="AI")
-baChain = HumanFeedBackChain(verbose=True, memory=baMemory)
-
-"""读取document/business_context.py文件内容作为context"""
-context_path = "./documents/bussiness_context/business_context.md"
-
-
-def sendMessage(chatbot, input):
- chatbot.append((
- (None if len(input) == 0 else input), None))
- return chatbot
-
-
-def clearMemory(chatbot):
- chatbot.clear()
-    if baMemory is not None:
- baMemory.clear()
- return chatbot, ""
-
-def loadContext():
- textloader = TextLoader(context_path)
- return textloader.load()[0].page_content
-
-
-def saveContext(context):
- with open(context_path, 'w') as f:
- f.write(context)
-
-def feedBack(context, story, chatbot=[], input=""):
- if len(input) > 0:
- context += (f"\n\n {input}")
- saveContext(context)
-    response = baChain.run(
-        input=input, context=context, story=story, stop="\nAnswer:")
- chatbot[-1][1] = response
- return chatbot, "", context
-
-
-customerEmbedding = CustomEmbedding()
-
-faqChain = customerEmbedding.getFAQAgent()
-
-code_agent_executor = code_agent_executor()
-def faqFromLocal(input, chatbot=[]):
- # response = faqChain({"question": f"{input}"})
- response = faqChain.run(input)
- chatbot.append((input, response))
- return chatbot, ""
-
-
-def generateEmbeddings(chatbot=[]):
- response = customerEmbedding.calculateEmbedding()
- chatbot.append((None, response))
- return chatbot
-
-
-def generateCode(input: str, chatbot=[], returnCode=False):
-    if len(input) <= 0:
- chatbot[-1][1] = None
- return chatbot, ""
-    response = code_agent_executor.run(input=input)
- chatbot[-1][1] = response
- return chatbot, ""
-
-def generateCodeByMultiPart(context: str, relateCode: str, toolName: str, chatbot=[]):
- input = f"请根据如下信息{toolName}:\n{context}\n\n{relateCode}"
- return generateCode(input, chatbot)
-
-def sendMessageByMultiPart(chatbot, context: str, relateCode: str, toolName: str):
- input = f"请根据如下信息{toolName}:\n{context}\n\n{relateCode}"
- chatbot.append((input, None))
- return chatbot
-
-
-def rewriteContext(input, chatbot):
- response = contextRewriteChain.run(input=input, verbose=True)
- chatbot.append((input, response))
- return chatbot, response
-
-def generateCodeAndExcute(input, chatbot=[], language="python"):
- request = f'''write a {language} script to solve the following problem and return code and the results:\n{input}'''
- result = generate_and_excute_code_agent.run(request)
- chatbot.append((input, result))
- return chatbot
-
-def generatePyhonCodeAndExcute(input, chatbot=[]):
-    request = f'''write a python script to solve the following problem and return code and the results:\n{input}'''
- result = generate_and_excute_python_code.run(request)
- chatbot.append((input, result))
- return chatbot
-
-def generateShellCodeAndExcute(input, chatbot=[]):
-    request = f'''write a shell script to solve the following problem and return code and the results:\n{input}'''
- result = generate_and_excute_shell_code.run(request)
- chatbot.append((input, result))
- return chatbot
-
-toolTextBox = []
-with gr.Blocks() as demo:
- with gr.Row():
- with gr.Tab("Business"):
- with gr.Row():
- with gr.Column():
- chatbot = gr.Chatbot().style()
- with gr.Row():
- txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter").style(
- container=False)
- with gr.Column():
- with gr.Row():
- context = gr.Textbox(show_label=True, label="Context", placeholder="Enter Context").style(
- container=False)
- with gr.Row():
- story = gr.Textbox(show_label=True, label="User Story", placeholder="Enter User Story").style(
- container=False)
- with gr.Row():
- gr.Button("Generate Scenarios").click(clearMemory, [chatbot], [chatbot, txt]).then(sendMessage, [chatbot, txt], [chatbot]).then(
- feedBack, [context, story, chatbot], [chatbot, txt])
- with gr.Row():
- with gr.Column(scale=5):
- gr.Button("Rewrite Context").click(rewriteContext, [context, chatbot], [chatbot, context])
- with gr.Column(scale=1):
- gr.Button("Revert").click(loadContext, [], [context])
- with gr.Row():
- gr.Button("Save Context").click(saveContext, [context], [])
-
- with gr.Tab("Tech"):
- with gr.Row():
- with gr.Column():
- code_chatbot = gr.Chatbot().style()
- with gr.Row():
- code = gr.Textbox(show_label=False, label="Code Generate", placeholder="Enter text and press enter").style(
- container=False)
- with gr.Column():
- with gr.Row():
- code_context = gr.Textbox(show_label=True, label="Context", placeholder="Enter Context").style(
- container=False)
- with gr.Row():
- relateCode = gr.Textbox(show_label=True, label="Relate Code", placeholder="Enter Relate Code").style(
- container=False)
- for index, tool in enumerate(code_agent_tools):
- with gr.Row():
- toolTextBox.append(gr.Textbox(show_label=False, visible=False, label=tool.name, value=tool.name).style())
- gr.Button(tool.name).click(
- sendMessageByMultiPart, [code_chatbot, code_context, relateCode, toolTextBox[index]], [code_chatbot]).then(
- generateCodeByMultiPart, [code_context, relateCode, toolTextBox[index], code_chatbot], [code_chatbot, code])
- with gr.Tab("FAQ"):
- faq_chatbot = gr.Chatbot().style()
- with gr.Row():
- faq = gr.Textbox(show_label=False, placeholder="Enter text and press enter").style(
- container=False)
- with gr.Row():
- gr.Button("Regenerate embedding").click(generateEmbeddings,[faq_chatbot], [faq_chatbot])
- with gr.Tab("TOOL"):
- with gr.Row():
- with gr.Column():
- tool_request = gr.Textbox(show_label=False, placeholder="Enter your tool Request").style(
- container=False, show_copy_button=True)
- language = gr.Dropdown(choices=["Python", "Shell"], label="Language", value="Python").style()
- tool_button = gr.Button("Generate Code and Execute with agent")
- python_tool_button = gr.Button("Generate Python Code and Execute")
-                    shell_tool_button = gr.Button("Generate Shell Code and Execute")
- with gr.Column():
- tool_chatbot = gr.Chatbot(elem_id="chatbot").style(container=False)
- tool_button.click(generateCodeAndExcute,[tool_request, tool_chatbot, language], [tool_chatbot])
- python_tool_button.click(generatePyhonCodeAndExcute,[tool_request, tool_chatbot], [tool_chatbot])
- shell_tool_button.click(generateShellCodeAndExcute,[tool_request, tool_chatbot], [tool_chatbot])
-
- txt.submit(sendMessage, [chatbot, txt], [chatbot]).then(
- feedBack, [context, story, chatbot, txt], [chatbot, txt, context])
-
- code.submit(sendMessage, [code_chatbot, code], [code_chatbot]).then(
- generateCode, [code, code_chatbot], [code_chatbot, code])
-
- faq.submit(faqFromLocal, [faq, faq_chatbot], [faq_chatbot, faq])
-
- demo.load(loadContext, [], [context])
-demo.launch()
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/transforms.py b/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/transforms.py
deleted file mode 100644
index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/transforms.py
+++ /dev/null
@@ -1,209 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
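The `searchsorted` helper above finds each input's bin by counting how many bin edges the input has passed. A minimal pure-Python sketch of the same counting trick (illustrative only; the real code does this vectorized in torch, and nudges the last edge by `eps` so an input equal to the right boundary still lands in the final bin):

```python
def bucket_index(bin_locations, x):
    # Count how many bin edges are <= x, then subtract 1 to get the bin index.
    # Mirrors the torch expression (inputs[..., None] >= bin_locations).sum(-1) - 1.
    return sum(1 for edge in bin_locations if x >= edge) - 1

edges = [0.0, 0.25, 0.5, 0.75, 1.0]
assert bucket_index(edges, 0.3) == 1   # falls in [0.25, 0.5)
assert bucket_index(edges, 0.75) == 3  # exactly on an edge joins the bin to its right
```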
-
-
-def unconstrained_rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails="linear",
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == "linear":
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError("{} tails are not implemented.".format(tails))
-
- (
- outputs[inside_interval_mask],
- logabsdet[inside_interval_mask],
- ) = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound,
- right=tail_bound,
- bottom=-tail_bound,
- top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0.0,
- right=1.0,
- bottom=0.0,
- top=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError("Input to a transform is not within its domain")
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError("Minimal bin width too large for the number of bins")
- if min_bin_height * num_bins > 1.0:
- raise ValueError("Minimal bin height too large for the number of bins")
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- ) + input_heights * (input_delta - input_derivatives)
- b = input_heights * input_derivatives - (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- )
- c = -input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (
- input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
- )
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
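A brief note on the `inverse` branch above: inverting a rational-quadratic segment reduces to a quadratic in $\theta$, with $a$, $b$, $c$ the tensors computed in the code. The code evaluates the root as `2 * c / (-b - sqrt(discriminant))`, which is algebraically equivalent to the textbook quadratic formula but better behaved when $b$ and $\sqrt{b^2 - 4ac}$ nearly cancel:

```latex
a\theta^{2} + b\theta + c = 0
\quad\Longrightarrow\quad
\theta = \frac{2c}{-b - \sqrt{b^{2} - 4ac}}
       = \frac{-b + \sqrt{b^{2} - 4ac}}{2a}
```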
diff --git a/spaces/AI-Zero-to-Hero/03-GR-AI-Text2ArtGenerator/README.md b/spaces/AI-Zero-to-Hero/03-GR-AI-Text2ArtGenerator/README.md
deleted file mode 100644
index 8874dcc12ba4c5f683c68d6e394d6a238edf6ea1..0000000000000000000000000000000000000000
--- a/spaces/AI-Zero-to-Hero/03-GR-AI-Text2ArtGenerator/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 03 GR AI Text2ArtGenerator
-emoji: 🦀
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.4
-app_file: app.py
-pinned: false
-license: artistic-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/data/audio_dataset.py b/spaces/AIConsultant/MusicGen/audiocraft/data/audio_dataset.py
deleted file mode 100644
index 9d7442526186b3712f5d4754f928a40ecd964174..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/data/audio_dataset.py
+++ /dev/null
@@ -1,587 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""AudioDataset support. In order to handle a larger number of files
-without having to scan again the folders, we precompute some metadata
-(filename, sample rate, duration), and use that to efficiently sample audio segments.
-"""
-import argparse
-import copy
-from concurrent.futures import ThreadPoolExecutor, Future
-from dataclasses import dataclass, fields
-from contextlib import ExitStack
-from functools import lru_cache
-import gzip
-import json
-import logging
-import os
-from pathlib import Path
-import random
-import sys
-import typing as tp
-
-import torch
-import torch.nn.functional as F
-
-from .audio import audio_read, audio_info
-from .audio_utils import convert_audio
-from .zip import PathInZip
-
-try:
- import dora
-except ImportError:
- dora = None # type: ignore
-
-
-@dataclass(order=True)
-class BaseInfo:
-
- @classmethod
- def _dict2fields(cls, dictionary: dict):
- return {
- field.name: dictionary[field.name]
- for field in fields(cls) if field.name in dictionary
- }
-
- @classmethod
- def from_dict(cls, dictionary: dict):
- _dictionary = cls._dict2fields(dictionary)
- return cls(**_dictionary)
-
- def to_dict(self):
- return {
- field.name: self.__getattribute__(field.name)
- for field in fields(self)
- }
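The `BaseInfo` round-trip above is keyed on `dataclasses.fields`: only dictionary keys that match declared fields are consumed, so extra keys are silently ignored. A standalone sketch of the same pattern (the `Point` class here is hypothetical, purely for illustration):

```python
from dataclasses import dataclass, fields

@dataclass
class Point:
    x: int
    y: int

def from_dict(cls, dictionary):
    # Keep only the keys that correspond to declared dataclass fields.
    kwargs = {f.name: dictionary[f.name] for f in fields(cls) if f.name in dictionary}
    return cls(**kwargs)

p = from_dict(Point, {"x": 1, "y": 2, "z": 99})  # the extra "z" key is dropped
assert p == Point(1, 2)
```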
-
-
-@dataclass(order=True)
-class AudioMeta(BaseInfo):
- path: str
- duration: float
- sample_rate: int
- amplitude: tp.Optional[float] = None
- weight: tp.Optional[float] = None
- # info_path is used to load additional information about the audio file that is stored in zip files.
- info_path: tp.Optional[PathInZip] = None
-
- @classmethod
- def from_dict(cls, dictionary: dict):
- base = cls._dict2fields(dictionary)
- if 'info_path' in base and base['info_path'] is not None:
- base['info_path'] = PathInZip(base['info_path'])
- return cls(**base)
-
- def to_dict(self):
- d = super().to_dict()
- if d['info_path'] is not None:
- d['info_path'] = str(d['info_path'])
- return d
-
-
-@dataclass(order=True)
-class SegmentInfo(BaseInfo):
- meta: AudioMeta
- seek_time: float
- # The following values are given once the audio is processed, e.g.
- # at the target sample rate and target number of channels.
- n_frames: int # actual number of frames without padding
- total_frames: int # total number of frames, padding included
- sample_rate: int # actual sample rate
- channels: int # number of audio channels.
-
-
-DEFAULT_EXTS = ['.wav', '.mp3', '.flac', '.ogg', '.m4a']
-
-logger = logging.getLogger(__name__)
-
-
-def _get_audio_meta(file_path: str, minimal: bool = True) -> AudioMeta:
- """AudioMeta from a path to an audio file.
-
- Args:
- file_path (str): Resolved path of valid audio file.
- minimal (bool): Whether to only load the minimal set of metadata (takes longer if not).
- Returns:
- AudioMeta: Audio file path and its metadata.
- """
- info = audio_info(file_path)
- amplitude: tp.Optional[float] = None
- if not minimal:
- wav, sr = audio_read(file_path)
- amplitude = wav.abs().max().item()
- return AudioMeta(file_path, info.duration, info.sample_rate, amplitude)
-
-
-def _resolve_audio_meta(m: AudioMeta, fast: bool = True) -> AudioMeta:
- """If Dora is available as a dependency, try to resolve potential relative paths
- in list of AudioMeta. This method is expected to be used when loading meta from file.
-
- Args:
- m (AudioMeta): Audio meta to resolve.
- fast (bool): If True, uses a really fast check for determining if a file
- is already absolute or not. Only valid on Linux/Mac.
- Returns:
- AudioMeta: Audio meta with resolved path.
- """
- def is_abs(m):
- if fast:
- return str(m)[0] == '/'
- else:
-            return os.path.isabs(str(m))
-
- if not dora:
- return m
-
- if not is_abs(m.path):
- m.path = dora.git_save.to_absolute_path(m.path)
- if m.info_path is not None and not is_abs(m.info_path.zip_path):
- m.info_path.zip_path = dora.git_save.to_absolute_path(m.path)
- return m
-
-
-def find_audio_files(path: tp.Union[Path, str],
- exts: tp.List[str] = DEFAULT_EXTS,
- resolve: bool = True,
- minimal: bool = True,
- progress: bool = False,
- workers: int = 0) -> tp.List[AudioMeta]:
- """Build a list of AudioMeta from a given path,
- collecting relevant audio files and fetching meta info.
-
- Args:
- path (str or Path): Path to folder containing audio files.
- exts (list of str): List of file extensions to consider for audio files.
- minimal (bool): Whether to only load the minimal set of metadata (takes longer if not).
- progress (bool): Whether to log progress on audio files collection.
- workers (int): number of parallel workers, if 0, use only the current thread.
- Returns:
- list of AudioMeta: List of audio file path and its metadata.
- """
- audio_files = []
- futures: tp.List[Future] = []
- pool: tp.Optional[ThreadPoolExecutor] = None
- with ExitStack() as stack:
- if workers > 0:
- pool = ThreadPoolExecutor(workers)
- stack.enter_context(pool)
-
- if progress:
- print("Finding audio files...")
- for root, folders, files in os.walk(path, followlinks=True):
- for file in files:
- full_path = Path(root) / file
- if full_path.suffix.lower() in exts:
- audio_files.append(full_path)
- if pool is not None:
- futures.append(pool.submit(_get_audio_meta, str(audio_files[-1]), minimal))
- if progress:
- print(format(len(audio_files), " 8d"), end='\r', file=sys.stderr)
-
- if progress:
- print("Getting audio metadata...")
- meta: tp.List[AudioMeta] = []
- for idx, file_path in enumerate(audio_files):
- try:
- if pool is None:
- m = _get_audio_meta(str(file_path), minimal)
- else:
- m = futures[idx].result()
- if resolve:
- m = _resolve_audio_meta(m)
- except Exception as err:
- print("Error with", str(file_path), err, file=sys.stderr)
- continue
- meta.append(m)
- if progress:
- print(format((1 + idx) / len(audio_files), " 3.1%"), end='\r', file=sys.stderr)
- meta.sort()
- return meta
-
-
-def load_audio_meta(path: tp.Union[str, Path],
- resolve: bool = True, fast: bool = True) -> tp.List[AudioMeta]:
- """Load list of AudioMeta from an optionally compressed json file.
-
- Args:
- path (str or Path): Path to JSON file.
- resolve (bool): Whether to resolve the path from AudioMeta (default=True).
- fast (bool): activates some tricks to make things faster.
- Returns:
-        list of AudioMeta: List of audio file paths and their metadata.
- """
- open_fn = gzip.open if str(path).lower().endswith('.gz') else open
- with open_fn(path, 'rb') as fp: # type: ignore
- lines = fp.readlines()
- meta = []
- for line in lines:
- d = json.loads(line)
- m = AudioMeta.from_dict(d)
- if resolve:
- m = _resolve_audio_meta(m, fast=fast)
- meta.append(m)
- return meta
-
-
-def save_audio_meta(path: tp.Union[str, Path], meta: tp.List[AudioMeta]):
- """Save the audio metadata to the file pointer as json.
-
- Args:
- path (str or Path): Path to JSON file.
-        meta (list of AudioMeta): List of audio meta to save.
- """
- Path(path).parent.mkdir(exist_ok=True, parents=True)
- open_fn = gzip.open if str(path).lower().endswith('.gz') else open
- with open_fn(path, 'wb') as fp: # type: ignore
- for m in meta:
- json_str = json.dumps(m.to_dict()) + '\n'
- json_bytes = json_str.encode('utf-8')
- fp.write(json_bytes)
-
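`save_audio_meta` / `load_audio_meta` above persist the metadata as one JSON object per line, gzip-compressed whenever the filename ends in `.gz`. A self-contained sketch of that round-trip using only the standard library (plain dicts stand in for `AudioMeta`; the function names are illustrative, not part of the module):

```python
import gzip
import json
import tempfile
from pathlib import Path

def write_meta_jsonl(path, records):
    # One JSON object per line; gzip-compress when the path ends in .gz.
    path = Path(path)
    path.parent.mkdir(exist_ok=True, parents=True)
    open_fn = gzip.open if str(path).lower().endswith('.gz') else open
    with open_fn(path, 'wb') as fp:
        for rec in records:
            fp.write((json.dumps(rec) + '\n').encode('utf-8'))

def read_meta_jsonl(path):
    open_fn = gzip.open if str(path).lower().endswith('.gz') else open
    with open_fn(path, 'rb') as fp:
        return [json.loads(line) for line in fp]

records = [
    {"path": "a.wav", "duration": 1.5, "sample_rate": 16000},
    {"path": "b.wav", "duration": 3.0, "sample_rate": 48000},
]
with tempfile.TemporaryDirectory() as tmp:
    for name in ("meta.jsonl", "meta.jsonl.gz"):  # plain and compressed
        target = Path(tmp) / name
        write_meta_jsonl(target, records)
        assert read_meta_jsonl(target) == records
```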
-
-class AudioDataset:
- """Base audio dataset.
-
- The dataset takes a list of AudioMeta and create a dataset composed of segments of audio
- and potentially additional information, by creating random segments from the list of audio
- files referenced in the metadata and applying minimal data pre-processing such as resampling,
- mixing of channels, padding, etc.
-
- If no segment_duration value is provided, the AudioDataset will return the full wav for each
- audio file. Otherwise, it will randomly sample audio files and create a segment of the specified
- duration, applying padding if required.
-
- By default, only the torch Tensor corresponding to the waveform is returned. Setting return_info=True
- allows to return a tuple containing the torch Tensor and additional metadata on the segment and the
- original audio meta.
-
- Note that you can call `start_epoch(epoch)` in order to get
- a deterministic "randomization" for `shuffle=True`.
- For a given epoch and dataset index, this will always return the same extract.
- You can get back some diversity by setting the `shuffle_seed` param.
-
- Args:
- meta (list of AudioMeta): List of audio files metadata.
- segment_duration (float, optional): Optional segment duration of audio to load.
- If not specified, the dataset will load the full audio segment from the file.
- shuffle (bool): Set to `True` to have the data reshuffled at every epoch.
- sample_rate (int): Target sample rate of the loaded audio samples.
- channels (int): Target number of channels of the loaded audio samples.
- sample_on_duration (bool): Set to `True` to sample segments with probability
- dependent on audio file duration. This is only used if `segment_duration` is provided.
- sample_on_weight (bool): Set to `True` to sample segments using the `weight` entry of
- `AudioMeta`. If `sample_on_duration` is also True, the actual weight will be the product
- of the file duration and file weight. This is only used if `segment_duration` is provided.
- min_segment_ratio (float): Minimum segment ratio to use when the audio file
- is shorter than the desired segment.
- max_read_retry (int): Maximum number of retries to sample an audio segment from the dataset.
- return_info (bool): Whether to return the wav only or return wav along with segment info and metadata.
- min_audio_duration (float, optional): Minimum audio file duration, in seconds, if provided
- audio shorter than this will be filtered out.
- max_audio_duration (float, optional): Maximal audio file duration in seconds, if provided
- audio longer than this will be filtered out.
- shuffle_seed (int): can be used to further randomize
- load_wav (bool): if False, skip loading the wav but returns a tensor of 0
- with the expected segment_duration (which must be provided if load_wav is False).
- permutation_on_files (bool): only if `sample_on_weight` and `sample_on_duration`
- are False. Will ensure a permutation on files when going through the dataset.
- In that case the epoch number must be provided in order for the model
- to continue the permutation across epochs. In that case, it is assumed
- that `num_samples = total_batch_size * num_updates_per_epoch`, with
- `total_batch_size` the overall batch size accounting for all gpus.
- """
- def __init__(self,
- meta: tp.List[AudioMeta],
- segment_duration: tp.Optional[float] = None,
- shuffle: bool = True,
- num_samples: int = 10_000,
- sample_rate: int = 48_000,
- channels: int = 2,
- pad: bool = True,
- sample_on_duration: bool = True,
- sample_on_weight: bool = True,
- min_segment_ratio: float = 0.5,
- max_read_retry: int = 10,
- return_info: bool = False,
- min_audio_duration: tp.Optional[float] = None,
- max_audio_duration: tp.Optional[float] = None,
- shuffle_seed: int = 0,
- load_wav: bool = True,
- permutation_on_files: bool = False,
- ):
- assert len(meta) > 0, "No audio meta provided to AudioDataset. Please check loading of audio meta."
- assert segment_duration is None or segment_duration > 0
- assert segment_duration is None or min_segment_ratio >= 0
- self.segment_duration = segment_duration
- self.min_segment_ratio = min_segment_ratio
- self.max_audio_duration = max_audio_duration
- self.min_audio_duration = min_audio_duration
- if self.min_audio_duration is not None and self.max_audio_duration is not None:
- assert self.min_audio_duration <= self.max_audio_duration
- self.meta: tp.List[AudioMeta] = self._filter_duration(meta)
- assert len(self.meta) # Fail fast if all data has been filtered.
- self.total_duration = sum(d.duration for d in self.meta)
-
- if segment_duration is None:
- num_samples = len(self.meta)
- self.num_samples = num_samples
- self.shuffle = shuffle
- self.sample_rate = sample_rate
- self.channels = channels
- self.pad = pad
- self.sample_on_weight = sample_on_weight
- self.sample_on_duration = sample_on_duration
- self.sampling_probabilities = self._get_sampling_probabilities()
- self.max_read_retry = max_read_retry
- self.return_info = return_info
- self.shuffle_seed = shuffle_seed
- self.current_epoch: tp.Optional[int] = None
- self.load_wav = load_wav
- if not load_wav:
- assert segment_duration is not None
- self.permutation_on_files = permutation_on_files
- if permutation_on_files:
- assert not self.sample_on_duration
- assert not self.sample_on_weight
- assert self.shuffle
-
- def start_epoch(self, epoch: int):
- self.current_epoch = epoch
-
- def __len__(self):
- return self.num_samples
-
- def _get_sampling_probabilities(self, normalized: bool = True):
- """Return the sampling probabilities for each file inside `self.meta`."""
- scores: tp.List[float] = []
- for file_meta in self.meta:
- score = 1.
- if self.sample_on_weight and file_meta.weight is not None:
- score *= file_meta.weight
- if self.sample_on_duration:
- score *= file_meta.duration
- scores.append(score)
- probabilities = torch.tensor(scores)
- if normalized:
- probabilities /= probabilities.sum()
- return probabilities
-
- @staticmethod
- @lru_cache(16)
- def _get_file_permutation(num_files: int, permutation_index: int, base_seed: int):
-        # Used to keep the most recent file permutations in memory implicitly (via lru_cache).
-        # This will work unless someone is using a lot of Datasets in parallel.
- rng = torch.Generator()
- rng.manual_seed(base_seed + permutation_index)
- return torch.randperm(num_files, generator=rng)
-
- def sample_file(self, index: int, rng: torch.Generator) -> AudioMeta:
- """Sample a given file from `self.meta`. Can be overridden in subclasses.
- This is only called if `segment_duration` is not None.
-
- You must use the provided random number generator `rng` for reproducibility.
- You can further make use of the index accessed.
- """
- if self.permutation_on_files:
- assert self.current_epoch is not None
- total_index = self.current_epoch * len(self) + index
- permutation_index = total_index // len(self.meta)
- relative_index = total_index % len(self.meta)
- permutation = AudioDataset._get_file_permutation(
- len(self.meta), permutation_index, self.shuffle_seed)
- file_index = permutation[relative_index]
- return self.meta[file_index]
-
- if not self.sample_on_weight and not self.sample_on_duration:
- file_index = int(torch.randint(len(self.sampling_probabilities), (1,), generator=rng).item())
- else:
- file_index = int(torch.multinomial(self.sampling_probabilities, 1, generator=rng).item())
-
- return self.meta[file_index]
-
- def _audio_read(self, path: str, seek_time: float = 0, duration: float = -1):
- # Override this method in subclass if needed.
- if self.load_wav:
- return audio_read(path, seek_time, duration, pad=False)
- else:
- assert self.segment_duration is not None
- n_frames = int(self.sample_rate * self.segment_duration)
- return torch.zeros(self.channels, n_frames), self.sample_rate
-
- def __getitem__(self, index: int) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, SegmentInfo]]:
- if self.segment_duration is None:
- file_meta = self.meta[index]
- out, sr = audio_read(file_meta.path)
- out = convert_audio(out, sr, self.sample_rate, self.channels)
- n_frames = out.shape[-1]
- segment_info = SegmentInfo(file_meta, seek_time=0., n_frames=n_frames, total_frames=n_frames,
- sample_rate=self.sample_rate, channels=out.shape[0])
- else:
- rng = torch.Generator()
- if self.shuffle:
- # Seed from the index plus extra randomness: fully random if we don't know
- # the epoch, otherwise derived from the epoch number and optional shuffle_seed.
- if self.current_epoch is None:
- rng.manual_seed(index + self.num_samples * random.randint(0, 2**24))
- else:
- rng.manual_seed(index + self.num_samples * (self.current_epoch + self.shuffle_seed))
- else:
- # We only use index
- rng.manual_seed(index)
-
- for retry in range(self.max_read_retry):
- file_meta = self.sample_file(index, rng)
- # Add some variance to the file position even when the audio file is shorter
- # than the segment, without ever producing empty segments.
- max_seek = max(0, file_meta.duration - self.segment_duration * self.min_segment_ratio)
- seek_time = torch.rand(1, generator=rng).item() * max_seek
- try:
- out, sr = audio_read(file_meta.path, seek_time, self.segment_duration, pad=False)
- out = convert_audio(out, sr, self.sample_rate, self.channels)
- n_frames = out.shape[-1]
- target_frames = int(self.segment_duration * self.sample_rate)
- if self.pad:
- out = F.pad(out, (0, target_frames - n_frames))
- segment_info = SegmentInfo(file_meta, seek_time, n_frames=n_frames, total_frames=target_frames,
- sample_rate=self.sample_rate, channels=out.shape[0])
- except Exception as exc:
- logger.warning("Error opening file %s: %r", file_meta.path, exc)
- if retry == self.max_read_retry - 1:
- raise
- else:
- break
-
- if self.return_info:
- # Returns the wav and additional information on the wave segment
- return out, segment_info
- else:
- return out
-
- def collater(self, samples):
- """The collater function has to be provided to the dataloader
- if AudioDataset has return_info=True in order to properly collate
- the samples of a batch.
- """
- if self.segment_duration is None and len(samples) > 1:
- assert self.pad, "Must allow padding when batching examples of different durations."
-
- # In this case the audio reaching the collater is of variable length as segment_duration=None.
- to_pad = self.segment_duration is None and self.pad
- if to_pad:
- max_len = max([wav.shape[-1] for wav, _ in samples])
-
- def _pad_wav(wav):
- return F.pad(wav, (0, max_len - wav.shape[-1]))
-
- if self.return_info:
- if len(samples) > 0:
- assert len(samples[0]) == 2
- assert isinstance(samples[0][0], torch.Tensor)
- assert isinstance(samples[0][1], SegmentInfo)
-
- wavs = [wav for wav, _ in samples]
- segment_infos = [copy.deepcopy(info) for _, info in samples]
-
- if to_pad:
- # Each wav could be of a different duration as they are not segmented.
- for i in range(len(samples)):
- # total_frames is the padded length of the signal, so update it as we pad.
- segment_infos[i].total_frames = max_len
- wavs[i] = _pad_wav(wavs[i])
-
- wav = torch.stack(wavs)
- return wav, segment_infos
- else:
- assert isinstance(samples[0], torch.Tensor)
- if to_pad:
- samples = [_pad_wav(s) for s in samples]
- return torch.stack(samples)
-
- def _filter_duration(self, meta: tp.List[AudioMeta]) -> tp.List[AudioMeta]:
- """Filters out audio files whose duration does not allow sampling segments from them."""
- orig_len = len(meta)
-
- # Filter data that is too short.
- if self.min_audio_duration is not None:
- meta = [m for m in meta if m.duration >= self.min_audio_duration]
-
- # Filter data that is too long.
- if self.max_audio_duration is not None:
- meta = [m for m in meta if m.duration <= self.max_audio_duration]
-
- filtered_len = len(meta)
- removed_percentage = 100*(1-float(filtered_len)/orig_len)
- msg = 'Removed %.2f percent of the data because it was too short or too long.' % removed_percentage
- if removed_percentage < 10:
- logger.debug(msg)
- else:
- logger.warning(msg)
- return meta
-
- @classmethod
- def from_meta(cls, root: tp.Union[str, Path], **kwargs):
- """Instantiate AudioDataset from a path to a directory containing a manifest as a jsonl file.
-
- Args:
- root (str or Path): Path to the folder containing the manifest file, or to the manifest itself.
- kwargs: Additional keyword arguments for the AudioDataset.
- """
- root = Path(root)
- if root.is_dir():
- if (root / 'data.jsonl').exists():
- root = root / 'data.jsonl'
- elif (root / 'data.jsonl.gz').exists():
- root = root / 'data.jsonl.gz'
- else:
- raise ValueError("Don't know where to read metadata from in the dir. "
- "Expecting either a data.jsonl or data.jsonl.gz file but none found.")
- meta = load_audio_meta(root)
- return cls(meta, **kwargs)
-
- @classmethod
- def from_path(cls, root: tp.Union[str, Path], minimal_meta: bool = True,
- exts: tp.List[str] = DEFAULT_EXTS, **kwargs):
- """Instantiate AudioDataset from a path containing (possibly nested) audio files.
-
- Args:
- root (str or Path): Path to root folder containing audio files.
- minimal_meta (bool): Whether to only load minimal metadata or not.
- exts (list of str): Extensions for audio files.
- kwargs: Additional keyword arguments for the AudioDataset.
- """
- root = Path(root)
- if root.is_file():
- meta = load_audio_meta(root, resolve=True)
- else:
- meta = find_audio_files(root, exts, minimal=minimal_meta, resolve=True)
- return cls(meta, **kwargs)
-
-
-def main():
- logging.basicConfig(stream=sys.stderr, level=logging.INFO)
- parser = argparse.ArgumentParser(
- prog='audio_dataset',
- description='Generate .jsonl files by scanning a folder.')
- parser.add_argument('root', help='Root folder with all the audio files')
- parser.add_argument('output_meta_file',
- help='Output file to store the metadata.')
- parser.add_argument('--complete',
- action='store_false', dest='minimal', default=True,
- help='Retrieve all metadata, even the ones that are expensive '
- 'to compute (e.g. normalization).')
- parser.add_argument('--resolve',
- action='store_true', default=False,
- help='Resolve the paths to be absolute and with no symlinks.')
- parser.add_argument('--workers',
- default=10, type=int,
- help='Number of workers.')
- args = parser.parse_args()
- meta = find_audio_files(args.root, DEFAULT_EXTS, progress=True,
- resolve=args.resolve, minimal=args.minimal, workers=args.workers)
- save_audio_meta(args.output_meta_file, meta)
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/AICopilot/Dropbox/README.md b/spaces/AICopilot/Dropbox/README.md
deleted file mode 100644
index 205a990da112f6cab63766a22b1a8e169af8da28..0000000000000000000000000000000000000000
--- a/spaces/AICopilot/Dropbox/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Dropbox
-emoji: 🌍
-colorFrom: pink
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.2.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/AIFILMS/StyleGANEX/configs/transforms_config.py b/spaces/AIFILMS/StyleGANEX/configs/transforms_config.py
deleted file mode 100644
index 0af0404f4f59c79e5f672205031470bdab013622..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/StyleGANEX/configs/transforms_config.py
+++ /dev/null
@@ -1,242 +0,0 @@
-from abc import abstractmethod
-import torchvision.transforms as transforms
-from datasets import augmentations
-
-
-class TransformsConfig(object):
-
- def __init__(self, opts):
- self.opts = opts
-
- @abstractmethod
- def get_transforms(self):
- pass
-
-
-class EncodeTransforms(TransformsConfig):
-
- def __init__(self, opts):
- super(EncodeTransforms, self).__init__(opts)
-
- def get_transforms(self):
- transforms_dict = {
- 'transform_gt_train': transforms.Compose([
- transforms.Resize((320, 320)),
- transforms.RandomHorizontalFlip(0.5),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_source': None,
- 'transform_test': transforms.Compose([
- transforms.Resize((320, 320)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_inference': transforms.Compose([
- transforms.Resize((320, 320)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
- }
- return transforms_dict
-
-
-class FrontalizationTransforms(TransformsConfig):
-
- def __init__(self, opts):
- super(FrontalizationTransforms, self).__init__(opts)
-
- def get_transforms(self):
- transforms_dict = {
- 'transform_gt_train': transforms.Compose([
- transforms.Resize((256, 256)),
- transforms.RandomHorizontalFlip(0.5),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_source': transforms.Compose([
- transforms.Resize((256, 256)),
- transforms.RandomHorizontalFlip(0.5),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_test': transforms.Compose([
- transforms.Resize((256, 256)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_inference': transforms.Compose([
- transforms.Resize((256, 256)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
- }
- return transforms_dict
-
-
-class SketchToImageTransforms(TransformsConfig):
-
- def __init__(self, opts):
- super(SketchToImageTransforms, self).__init__(opts)
-
- def get_transforms(self):
- transforms_dict = {
- 'transform_gt_train': transforms.Compose([
- transforms.Resize((320, 320)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_source': transforms.Compose([
- transforms.Resize((320, 320)),
- transforms.ToTensor()]),
- 'transform_test': transforms.Compose([
- transforms.Resize((320, 320)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_inference': transforms.Compose([
- transforms.Resize((320, 320)),
- transforms.ToTensor()]),
- }
- return transforms_dict
-
-
-class SegToImageTransforms(TransformsConfig):
-
- def __init__(self, opts):
- super(SegToImageTransforms, self).__init__(opts)
-
- def get_transforms(self):
- transforms_dict = {
- 'transform_gt_train': transforms.Compose([
- transforms.Resize((320, 320)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_source': transforms.Compose([
- transforms.Resize((320, 320)),
- augmentations.ToOneHot(self.opts.label_nc),
- transforms.ToTensor()]),
- 'transform_test': transforms.Compose([
- transforms.Resize((320, 320)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_inference': transforms.Compose([
- transforms.Resize((320, 320)),
- augmentations.ToOneHot(self.opts.label_nc),
- transforms.ToTensor()])
- }
- return transforms_dict
-
-
-class SuperResTransforms(TransformsConfig):
-
- def __init__(self, opts):
- super(SuperResTransforms, self).__init__(opts)
-
- def get_transforms(self):
- if self.opts.resize_factors is None:
- self.opts.resize_factors = '1,2,4,8,16,32'
- factors = [int(f) for f in self.opts.resize_factors.split(",")]
- print("Performing down-sampling with factors: {}".format(factors))
- transforms_dict = {
- 'transform_gt_train': transforms.Compose([
- transforms.Resize((1280, 1280)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_source': transforms.Compose([
- transforms.Resize((320, 320)),
- augmentations.BilinearResize(factors=factors),
- transforms.Resize((320, 320)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_test': transforms.Compose([
- transforms.Resize((1280, 1280)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_inference': transforms.Compose([
- transforms.Resize((320, 320)),
- augmentations.BilinearResize(factors=factors),
- transforms.Resize((320, 320)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
- }
- return transforms_dict
-
-
-class SuperResTransforms_320(TransformsConfig):
-
- def __init__(self, opts):
- super(SuperResTransforms_320, self).__init__(opts)
-
- def get_transforms(self):
- if self.opts.resize_factors is None:
- self.opts.resize_factors = '1,2,4,8,16,32'
- factors = [int(f) for f in self.opts.resize_factors.split(",")]
- print("Performing down-sampling with factors: {}".format(factors))
- transforms_dict = {
- 'transform_gt_train': transforms.Compose([
- transforms.Resize((320, 320)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_source': transforms.Compose([
- transforms.Resize((320, 320)),
- augmentations.BilinearResize(factors=factors),
- transforms.Resize((320, 320)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_test': transforms.Compose([
- transforms.Resize((320, 320)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_inference': transforms.Compose([
- transforms.Resize((320, 320)),
- augmentations.BilinearResize(factors=factors),
- transforms.Resize((320, 320)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
- }
- return transforms_dict
-
-
-class ToonifyTransforms(TransformsConfig):
-
- def __init__(self, opts):
- super(ToonifyTransforms, self).__init__(opts)
-
- def get_transforms(self):
- transforms_dict = {
- 'transform_gt_train': transforms.Compose([
- transforms.Resize((1024, 1024)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_source': transforms.Compose([
- transforms.Resize((256, 256)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_test': transforms.Compose([
- transforms.Resize((1024, 1024)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_inference': transforms.Compose([
- transforms.Resize((256, 256)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
- }
- return transforms_dict
-
-class EditingTransforms(TransformsConfig):
-
- def __init__(self, opts):
- super(EditingTransforms, self).__init__(opts)
-
- def get_transforms(self):
- transforms_dict = {
- 'transform_gt_train': transforms.Compose([
- transforms.Resize((1280, 1280)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_source': transforms.Compose([
- transforms.Resize((320, 320)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_test': transforms.Compose([
- transforms.Resize((1280, 1280)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_inference': transforms.Compose([
- transforms.Resize((320, 320)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
- }
- return transforms_dict
\ No newline at end of file
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/midas/base_model.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/midas/base_model.py
deleted file mode 100644
index 5cf430239b47ec5ec07531263f26f5c24a2311cd..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/midas/base_model.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import torch
-
-
-class BaseModel(torch.nn.Module):
- def load(self, path):
- """Load model from file.
-
- Args:
- path (str): file path
- """
- parameters = torch.load(path, map_location=torch.device('cpu'))
-
- if "optimizer" in parameters:
- parameters = parameters["model"]
-
- self.load_state_dict(parameters)
diff --git a/spaces/AISuperheroes/07GR-NLP-Seq2Seq-AutoQA/README.md b/spaces/AISuperheroes/07GR-NLP-Seq2Seq-AutoQA/README.md
deleted file mode 100644
index 11a560d4ab5bb0e5ba99d748a2321ae7306aaa5d..0000000000000000000000000000000000000000
--- a/spaces/AISuperheroes/07GR-NLP-Seq2Seq-AutoQA/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 07GR NLP Seq2Seq AutoQA
-emoji: 😻
-colorFrom: gray
-colorTo: purple
-sdk: gradio
-sdk_version: 3.6
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Aaron299/bingo/Dockerfile b/spaces/Aaron299/bingo/Dockerfile
deleted file mode 100644
index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000
--- a/spaces/Aaron299/bingo/Dockerfile
+++ /dev/null
@@ -1,7 +0,0 @@
-FROM weaigc/bingo:latest
-
-ARG DEBIAN_FRONTEND=noninteractive
-
-ENV BING_HEADER ""
-
-CMD npm start
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/base_provider.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/base_provider.py
deleted file mode 100644
index 35764081ac16bf631166e208274ad58ba6547cbe..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/base_provider.py
+++ /dev/null
@@ -1,138 +0,0 @@
-from __future__ import annotations
-
-from asyncio import AbstractEventLoop
-from concurrent.futures import ThreadPoolExecutor
-from abc import ABC, abstractmethod
-
-from .helper import get_event_loop, get_cookies, format_prompt
-from ..typing import AsyncGenerator, CreateResult
-
-
-class BaseProvider(ABC):
- url: str
- working: bool = False
- needs_auth: bool = False
- supports_stream: bool = False
- supports_gpt_35_turbo: bool = False
- supports_gpt_4: bool = False
-
- @staticmethod
- @abstractmethod
- def create_completion(
- model: str,
- messages: list[dict[str, str]],
- stream: bool,
- **kwargs
- ) -> CreateResult:
- raise NotImplementedError()
-
- @classmethod
- async def create_async(
- cls,
- model: str,
- messages: list[dict[str, str]],
- *,
- loop: AbstractEventLoop = None,
- executor: ThreadPoolExecutor = None,
- **kwargs
- ) -> str:
- if not loop:
- loop = get_event_loop()
-
- def create_func() -> str:
- return "".join(cls.create_completion(
- model,
- messages,
- False,
- **kwargs
- ))
-
- return await loop.run_in_executor(
- executor,
- create_func
- )
-
- @classmethod
- @property
- def params(cls) -> str:
- params = [
- ("model", "str"),
- ("messages", "list[dict[str, str]]"),
- ("stream", "bool"),
- ]
- param = ", ".join([": ".join(p) for p in params])
- return f"g4f.provider.{cls.__name__} supports: ({param})"
-
-
-class AsyncProvider(BaseProvider):
- @classmethod
- def create_completion(
- cls,
- model: str,
- messages: list[dict[str, str]],
- stream: bool = False,
- **kwargs
- ) -> CreateResult:
- loop = get_event_loop()
- coro = cls.create_async(model, messages, **kwargs)
- yield loop.run_until_complete(coro)
-
- @staticmethod
- @abstractmethod
- async def create_async(
- model: str,
- messages: list[dict[str, str]],
- **kwargs
- ) -> str:
- raise NotImplementedError()
-
-
-class AsyncGeneratorProvider(AsyncProvider):
- supports_stream = True
-
- @classmethod
- def create_completion(
- cls,
- model: str,
- messages: list[dict[str, str]],
- stream: bool = True,
- **kwargs
- ) -> CreateResult:
- loop = get_event_loop()
- generator = cls.create_async_generator(
- model,
- messages,
- stream=stream,
- **kwargs
- )
- gen = generator.__aiter__()
- while True:
- try:
- yield loop.run_until_complete(gen.__anext__())
- except StopAsyncIteration:
- break
-
- @classmethod
- async def create_async(
- cls,
- model: str,
- messages: list[dict[str, str]],
- **kwargs
- ) -> str:
- return "".join([
- chunk async for chunk in cls.create_async_generator(
- model,
- messages,
- stream=False,
- **kwargs
- )
- ])
-
- @staticmethod
- @abstractmethod
- def create_async_generator(
- model: str,
- messages: list[dict[str, str]],
- **kwargs
- ) -> AsyncGenerator:
- raise NotImplementedError()
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/canvasinput.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/canvasinput.js
deleted file mode 100644
index 721c242b4fc04c350e8529377da255e11ce5bcf7..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/canvasinput.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import CanvasInput from './gameobjects/dynamictext/canvasinput/CanvasInput.js';
-export default CanvasInput;
\ No newline at end of file
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/training_scripts/sg2/training/networks.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/training_scripts/sg2/training/networks.py
deleted file mode 100644
index 291d1f6d157aeab10896bc106c15fe4d03fcb145..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/training_scripts/sg2/training/networks.py
+++ /dev/null
@@ -1,966 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import numpy as np
-import torch
-from torch_utils import misc
-from torch_utils import persistence
-from torch_utils.ops import conv2d_resample
-from torch_utils.ops import upfirdn2d
-from torch_utils.ops import bias_act
-from torch_utils.ops import fma
-
-# ----------------------------------------------------------------------------
-
-
-@misc.profiled_function
-def normalize_2nd_moment(x, dim=1, eps=1e-8):
- return x * (x.square().mean(dim=dim, keepdim=True) + eps).rsqrt()
-
-# ----------------------------------------------------------------------------
-
-
-@misc.profiled_function
-def modulated_conv2d(
- # Input tensor of shape [batch_size, in_channels, in_height, in_width].
- x,
- # Weight tensor of shape [out_channels, in_channels, kernel_height, kernel_width].
- weight,
- # Modulation coefficients of shape [batch_size, in_channels].
- styles,
- noise=None, # Optional noise tensor to add to the output activations.
- up=1, # Integer upsampling factor.
- down=1, # Integer downsampling factor.
- padding=0, # Padding with respect to the upsampled image.
- # Low-pass filter to apply when resampling activations. Must be prepared beforehand by calling upfirdn2d.setup_filter().
- resample_filter=None,
- demodulate=True, # Apply weight demodulation?
- # False = convolution, True = correlation (matches torch.nn.functional.conv2d).
- flip_weight=True,
- # Perform modulation, convolution, and demodulation as a single fused operation?
- fused_modconv=True,
-):
- batch_size = x.shape[0]
- out_channels, in_channels, kh, kw = weight.shape
- misc.assert_shape(weight, [out_channels, in_channels, kh, kw]) # [OIkk]
- misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW]
- misc.assert_shape(styles, [batch_size, in_channels]) # [NI]
-
- # Pre-normalize inputs to avoid FP16 overflow.
- if x.dtype == torch.float16 and demodulate:
- weight = weight * (1 / np.sqrt(in_channels * kh * kw) /
- weight.norm(float('inf'), dim=[1, 2, 3], keepdim=True)) # max_Ikk
- styles = styles / \
- styles.norm(float('inf'), dim=1, keepdim=True) # max_I
-
- # Calculate per-sample weights and demodulation coefficients.
- w = None
- dcoefs = None
- if demodulate or fused_modconv:
- w = weight.unsqueeze(0) # [NOIkk]
- w = w * styles.reshape(batch_size, 1, -1, 1, 1) # [NOIkk]
- if demodulate:
- dcoefs = (w.square().sum(dim=[2, 3, 4]) + 1e-8).rsqrt() # [NO]
- if demodulate and fused_modconv:
- w = w * dcoefs.reshape(batch_size, -1, 1, 1, 1) # [NOIkk]
-
- # Execute by scaling the activations before and after the convolution.
- if not fused_modconv:
- x = x * styles.to(x.dtype).reshape(batch_size, -1, 1, 1)
- x = conv2d_resample.conv2d_resample(x=x, w=weight.to(
- x.dtype), f=resample_filter, up=up, down=down, padding=padding, flip_weight=flip_weight)
- if demodulate and noise is not None:
- x = fma.fma(x, dcoefs.to(x.dtype).reshape(
- batch_size, -1, 1, 1), noise.to(x.dtype))
- elif demodulate:
- x = x * dcoefs.to(x.dtype).reshape(batch_size, -1, 1, 1)
- elif noise is not None:
- x = x.add_(noise.to(x.dtype))
- return x
-
- # Execute as one fused op using grouped convolution.
- with misc.suppress_tracer_warnings(): # this value will be treated as a constant
- batch_size = int(batch_size)
- misc.assert_shape(x, [batch_size, in_channels, None, None])
- x = x.reshape(1, -1, *x.shape[2:])
- w = w.reshape(-1, in_channels, kh, kw)
- x = conv2d_resample.conv2d_resample(x=x, w=w.to(
- x.dtype), f=resample_filter, up=up, down=down, padding=padding, groups=batch_size, flip_weight=flip_weight)
- x = x.reshape(batch_size, -1, *x.shape[2:])
- if noise is not None:
- x = x.add_(noise)
- return x
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class FullyConnectedLayer(torch.nn.Module):
- def __init__(self,
- in_features, # Number of input features.
- out_features, # Number of output features.
- bias=True, # Apply additive bias before the activation function?
- # Activation function: 'relu', 'lrelu', etc.
- activation='linear',
- lr_multiplier=1, # Learning rate multiplier.
- bias_init=0, # Initial value for the additive bias.
- ):
- super().__init__()
- self.activation = activation
- self.weight = torch.nn.Parameter(torch.randn(
- [out_features, in_features]) / lr_multiplier)
- self.bias = torch.nn.Parameter(torch.full(
- [out_features], np.float32(bias_init))) if bias else None
- self.weight_gain = lr_multiplier / np.sqrt(in_features)
- self.bias_gain = lr_multiplier
-
- def forward(self, x):
- w = self.weight.to(x.dtype) * self.weight_gain
- b = self.bias
- if b is not None:
- b = b.to(x.dtype)
- if self.bias_gain != 1:
- b = b * self.bias_gain
-
- if self.activation == 'linear' and b is not None:
- x = torch.addmm(b.unsqueeze(0), x, w.t())
- else:
- x = x.matmul(w.t())
- x = bias_act.bias_act(x, b, act=self.activation)
- return x
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class Conv2dLayer(torch.nn.Module):
- def __init__(self,
- in_channels, # Number of input channels.
- out_channels, # Number of output channels.
- # Width and height of the convolution kernel.
- kernel_size,
- bias=True, # Apply additive bias before the activation function?
- # Activation function: 'relu', 'lrelu', etc.
- activation='linear',
- up=1, # Integer upsampling factor.
- down=1, # Integer downsampling factor.
- # Low-pass filter to apply when resampling activations.
- resample_filter=[1, 3, 3, 1],
- # Clamp the output to +-X, None = disable clamping.
- conv_clamp=None,
- channels_last=False, # Expect the input to have memory_format=channels_last?
- trainable=True, # Update the weights of this layer during training?
- ):
- super().__init__()
- self.activation = activation
- self.up = up
- self.down = down
- self.conv_clamp = conv_clamp
- self.register_buffer(
- 'resample_filter', upfirdn2d.setup_filter(resample_filter))
- self.padding = kernel_size // 2
- self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2))
- self.act_gain = bias_act.activation_funcs[activation].def_gain
-
- memory_format = torch.channels_last if channels_last else torch.contiguous_format
- weight = torch.randn([out_channels, in_channels, kernel_size, kernel_size]).to(
- memory_format=memory_format)
- bias = torch.zeros([out_channels]) if bias else None
- if trainable:
- self.weight = torch.nn.Parameter(weight)
- self.bias = torch.nn.Parameter(bias) if bias is not None else None
- else:
- self.register_buffer('weight', weight)
- if bias is not None:
- self.register_buffer('bias', bias)
- else:
- self.bias = None
-
- def forward(self, x, gain=1):
- w = self.weight * self.weight_gain
- b = self.bias.to(x.dtype) if self.bias is not None else None
- flip_weight = (self.up == 1) # slightly faster
- x = conv2d_resample.conv2d_resample(x=x, w=w.to(
- x.dtype), f=self.resample_filter, up=self.up, down=self.down, padding=self.padding, flip_weight=flip_weight)
-
- act_gain = self.act_gain * gain
- act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None
- x = bias_act.bias_act(x, b, act=self.activation,
- gain=act_gain, clamp=act_clamp)
- return x
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class MappingNetwork(torch.nn.Module):
- def __init__(self,
- # Input latent (Z) dimensionality, 0 = no latent.
- z_dim,
- # Conditioning label (C) dimensionality, 0 = no label.
- c_dim,
- # Intermediate latent (W) dimensionality.
- w_dim,
- # Number of intermediate latents to output, None = do not broadcast.
- num_ws,
- num_layers=8, # Number of mapping layers.
- # Label embedding dimensionality, None = same as w_dim.
- embed_features=None,
- # Number of intermediate features in the mapping layers, None = same as w_dim.
- layer_features=None,
- # Activation function: 'relu', 'lrelu', etc.
- activation='lrelu',
- # Learning rate multiplier for the mapping layers.
- lr_multiplier=0.01,
- # Decay for tracking the moving average of W during training, None = do not track.
- w_avg_beta=0.995,
- ):
- super().__init__()
- self.z_dim = z_dim
- self.c_dim = c_dim
- self.w_dim = w_dim
- self.num_ws = num_ws
- self.num_layers = num_layers
- self.w_avg_beta = w_avg_beta
-
- if embed_features is None:
- embed_features = w_dim
- if c_dim == 0:
- embed_features = 0
- if layer_features is None:
- layer_features = w_dim
- features_list = [z_dim + embed_features] + \
- [layer_features] * (num_layers - 1) + [w_dim]
-
- if c_dim > 0:
- self.embed = FullyConnectedLayer(c_dim, embed_features)
- for idx in range(num_layers):
- in_features = features_list[idx]
- out_features = features_list[idx + 1]
- layer = FullyConnectedLayer(
- in_features, out_features, activation=activation, lr_multiplier=lr_multiplier)
- setattr(self, f'fc{idx}', layer)
-
- if num_ws is not None and w_avg_beta is not None:
- self.register_buffer('w_avg', torch.zeros([w_dim]))
-
- def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, skip_w_avg_update=False):
- # Embed, normalize, and concat inputs.
- x = None
- with torch.autograd.profiler.record_function('input'):
- if self.z_dim > 0:
- misc.assert_shape(z, [None, self.z_dim])
- x = normalize_2nd_moment(z.to(torch.float32))
- if self.c_dim > 0:
- misc.assert_shape(c, [None, self.c_dim])
- y = normalize_2nd_moment(self.embed(c.to(torch.float32)))
- x = torch.cat([x, y], dim=1) if x is not None else y
-
- # Main layers.
- for idx in range(self.num_layers):
- layer = getattr(self, f'fc{idx}')
- x = layer(x)
-
- # Update moving average of W.
- if self.w_avg_beta is not None and self.training and not skip_w_avg_update:
- with torch.autograd.profiler.record_function('update_w_avg'):
- self.w_avg.copy_(x.detach().mean(
- dim=0).lerp(self.w_avg, self.w_avg_beta))
-
- # Broadcast.
- if self.num_ws is not None:
- with torch.autograd.profiler.record_function('broadcast'):
- x = x.unsqueeze(1).repeat([1, self.num_ws, 1])
-
- # Apply truncation.
- if truncation_psi != 1:
- with torch.autograd.profiler.record_function('truncate'):
- assert self.w_avg_beta is not None
- if self.num_ws is None or truncation_cutoff is None:
- x = self.w_avg.lerp(x, truncation_psi)
- else:
- x[:, :truncation_cutoff] = self.w_avg.lerp(
- x[:, :truncation_cutoff], truncation_psi)
- return x
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class SynthesisLayer(torch.nn.Module):
- def __init__(self,
- in_channels, # Number of input channels.
- out_channels, # Number of output channels.
- # Intermediate latent (W) dimensionality.
- w_dim,
- resolution, # Resolution of this layer.
- kernel_size=3, # Convolution kernel size.
- up=1, # Integer upsampling factor.
- use_noise=True, # Enable noise input?
- # Activation function: 'relu', 'lrelu', etc.
- activation='lrelu',
- # Low-pass filter to apply when resampling activations.
- resample_filter=[1, 3, 3, 1],
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=None,
- channels_last=False, # Use channels_last format for the weights?
- square=False, # default is for rectangle images
- ):
- super().__init__()
- self.resolution = resolution
- self.up = up
- self.use_noise = use_noise
- self.activation = activation
- self.conv_clamp = conv_clamp
- self.register_buffer(
- 'resample_filter', upfirdn2d.setup_filter(resample_filter))
- self.padding = kernel_size // 2
- self.act_gain = bias_act.activation_funcs[activation].def_gain
- self.square = square
-
- self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1)
- memory_format = torch.channels_last if channels_last else torch.contiguous_format
- self.weight = torch.nn.Parameter(torch.randn(
- [out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format))
- if use_noise:
- if self.square:
- self.register_buffer(
- 'noise_const', torch.randn([resolution, resolution]))
- else:
- self.register_buffer('noise_const', torch.randn(
- [resolution, resolution // 2]))
- self.noise_strength = torch.nn.Parameter(torch.zeros([]))
- self.bias = torch.nn.Parameter(torch.zeros([out_channels]))
-
- def forward(self, x, w, noise_mode='random', fused_modconv=True, gain=1):
- assert noise_mode in ['random', 'const', 'none']
- in_resolution = self.resolution // self.up
- if self.square:
- misc.assert_shape(
- x, [None, self.weight.shape[1], in_resolution, in_resolution])
- else:
- misc.assert_shape(
- x, [None, self.weight.shape[1], in_resolution, in_resolution // 2])
- styles = self.affine(w)
-
- noise = None
- if self.use_noise and noise_mode == 'random':
- if self.square:
- noise = torch.randn(
- [x.shape[0], 1, self.resolution, self.resolution], device=x.device) * self.noise_strength
- else:
- noise = torch.randn(
- [x.shape[0], 1, self.resolution, self.resolution // 2], device=x.device) * self.noise_strength
- if self.use_noise and noise_mode == 'const':
- noise = self.noise_const * self.noise_strength
-
- flip_weight = (self.up == 1) # slightly faster
- x = modulated_conv2d(x=x, weight=self.weight, styles=styles, noise=noise, up=self.up,
- padding=self.padding, resample_filter=self.resample_filter, flip_weight=flip_weight, fused_modconv=fused_modconv)
-
- act_gain = self.act_gain * gain
- act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None
- x = bias_act.bias_act(x, self.bias.to(
- x.dtype), act=self.activation, gain=act_gain, clamp=act_clamp)
- return x
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class ToRGBLayer(torch.nn.Module):
- def __init__(self, in_channels, out_channels, w_dim, kernel_size=1, conv_clamp=None, channels_last=False):
- super().__init__()
- self.conv_clamp = conv_clamp
- self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1)
- memory_format = torch.channels_last if channels_last else torch.contiguous_format
- self.weight = torch.nn.Parameter(torch.randn(
- [out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format))
- self.bias = torch.nn.Parameter(torch.zeros([out_channels]))
- self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2))
-
- def forward(self, x, w, fused_modconv=True):
- styles = self.affine(w) * self.weight_gain
- x = modulated_conv2d(x=x, weight=self.weight, styles=styles,
- demodulate=False, fused_modconv=fused_modconv)
- x = bias_act.bias_act(x, self.bias.to(x.dtype), clamp=self.conv_clamp)
- return x
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class SynthesisBlock(torch.nn.Module):
- def __init__(self,
- # Number of input channels, 0 = first block.
- in_channels,
- # Number of output channels.
- out_channels,
- # Intermediate latent (W) dimensionality.
- w_dim,
- # Resolution of this block.
- resolution,
- # Number of output color channels.
- img_channels,
- is_last, # Is this the last block?
- # Architecture: 'orig', 'skip', 'resnet'.
- architecture='skip',
- # Low-pass filter to apply when resampling activations.
- resample_filter=[1, 3, 3, 1],
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=None,
- use_fp16=False, # Use FP16 for this block?
- fp16_channels_last=False, # Use channels-last memory format with FP16?
- square=False, # default is for rectangle images
- # Arguments for SynthesisLayer.
- **layer_kwargs,
- ):
- assert architecture in ['orig', 'skip', 'resnet']
- super().__init__()
- self.in_channels = in_channels
- self.w_dim = w_dim
- self.resolution = resolution
- self.img_channels = img_channels
- self.is_last = is_last
- self.architecture = architecture
- self.use_fp16 = use_fp16
- self.channels_last = (use_fp16 and fp16_channels_last)
- self.register_buffer(
- 'resample_filter', upfirdn2d.setup_filter(resample_filter))
- self.num_conv = 0
- self.num_torgb = 0
- self.square = square
-
- if in_channels == 0:
- if self.square:
- self.const = torch.nn.Parameter(torch.randn(
- [out_channels, resolution, resolution]))
- else: # rectangle
- self.const = torch.nn.Parameter(torch.randn(
- [out_channels, resolution, resolution // 2]))
-
- if in_channels != 0:
- self.conv0 = SynthesisLayer(in_channels, out_channels, w_dim=w_dim, resolution=resolution, up=2,
- resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last, square=square, **layer_kwargs)
- self.num_conv += 1
-
- self.conv1 = SynthesisLayer(out_channels, out_channels, w_dim=w_dim, resolution=resolution,
- conv_clamp=conv_clamp, channels_last=self.channels_last, square=square, **layer_kwargs)
- self.num_conv += 1
-
- if is_last or architecture == 'skip':
- self.torgb = ToRGBLayer(out_channels, img_channels, w_dim=w_dim,
- conv_clamp=conv_clamp, channels_last=self.channels_last)
- self.num_torgb += 1
-
- if in_channels != 0 and architecture == 'resnet':
- self.skip = Conv2dLayer(in_channels, out_channels, kernel_size=1, bias=False, up=2,
- resample_filter=resample_filter, channels_last=self.channels_last)
-
- def forward(self, x, img, ws, force_fp32=False, fused_modconv=None, **layer_kwargs):
- misc.assert_shape(
- ws, [None, self.num_conv + self.num_torgb, self.w_dim])
- w_iter = iter(ws.unbind(dim=1))
- dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32
- memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format
- if fused_modconv is None:
- with misc.suppress_tracer_warnings(): # this value will be treated as a constant
- fused_modconv = (not self.training) and (
- dtype == torch.float32 or int(x.shape[0]) == 1)
-
- # Input.
- if self.in_channels == 0:
- x = self.const.to(dtype=dtype, memory_format=memory_format)
- x = x.unsqueeze(0).repeat([ws.shape[0], 1, 1, 1])
- else:
- if self.square:
- misc.assert_shape(
- x, [None, self.in_channels, self.resolution // 2, self.resolution // 2])
- else: # rectangle
- misc.assert_shape(
- x, [None, self.in_channels, self.resolution // 2, self.resolution // 4])
- x = x.to(dtype=dtype, memory_format=memory_format)
-
- # Main layers.
- if self.in_channels == 0:
- x = self.conv1(x, next(w_iter),
- fused_modconv=fused_modconv, **layer_kwargs)
- elif self.architecture == 'resnet':
- y = self.skip(x, gain=np.sqrt(0.5))
- x = self.conv0(x, next(w_iter),
- fused_modconv=fused_modconv, **layer_kwargs)
- x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv,
- gain=np.sqrt(0.5), **layer_kwargs)
- x = y.add_(x)
- else:
- x = self.conv0(x, next(w_iter),
- fused_modconv=fused_modconv, **layer_kwargs)
- x = self.conv1(x, next(w_iter),
- fused_modconv=fused_modconv, **layer_kwargs)
-
- # ToRGB.
- if img is not None:
- if self.square:
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution // 2, self.resolution // 2])
- else:
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution // 2, self.resolution // 4])
- img = upfirdn2d.upsample2d(img, self.resample_filter)
- if self.is_last or self.architecture == 'skip':
- y = self.torgb(x, next(w_iter), fused_modconv=fused_modconv)
- y = y.to(dtype=torch.float32,
- memory_format=torch.contiguous_format)
- img = img.add_(y) if img is not None else y
-
- assert x.dtype == dtype
- assert img is None or img.dtype == torch.float32
- return x, img
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class SynthesisNetwork(torch.nn.Module):
- def __init__(self,
- # Intermediate latent (W) dimensionality.
- w_dim,
- img_resolution, # Output image resolution.
- img_channels, # Number of color channels.
- square,
- # Overall multiplier for the number of channels.
- channel_base=32768,
- # Maximum number of channels in any layer.
- channel_max=512,
- # Use FP16 for the N highest resolutions.
- num_fp16_res=0,
- **block_kwargs, # Arguments for SynthesisBlock.
- ):
- assert img_resolution >= 4 and img_resolution & (
- img_resolution - 1) == 0
- super().__init__()
- self.w_dim = w_dim
- self.img_resolution = img_resolution
- self.img_resolution_log2 = int(np.log2(img_resolution))
- self.img_channels = img_channels
- self.square = square
- self.block_resolutions = [
- 2 ** i for i in range(2, self.img_resolution_log2 + 1)]
- channels_dict = {res: min(channel_base // res, channel_max)
- for res in self.block_resolutions}
- fp16_resolution = max(
- 2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8)
-
- self.num_ws = 0
- for res in self.block_resolutions:
- in_channels = channels_dict[res // 2] if res > 4 else 0
- out_channels = channels_dict[res]
- use_fp16 = (res >= fp16_resolution)
- is_last = (res == self.img_resolution)
- block = SynthesisBlock(in_channels, out_channels, w_dim=w_dim, resolution=res,
- img_channels=img_channels, is_last=is_last, use_fp16=use_fp16, square=square, **block_kwargs)
- self.num_ws += block.num_conv
- if is_last:
- self.num_ws += block.num_torgb
- setattr(self, f'b{res}', block)
-
- def forward(self, ws, return_feature=False, **block_kwargs):
- block_ws = []
- features = []
- with torch.autograd.profiler.record_function('split_ws'):
- misc.assert_shape(ws, [None, self.num_ws, self.w_dim])
- ws = ws.to(torch.float32)
- w_idx = 0
- for res in self.block_resolutions:
- block = getattr(self, f'b{res}')
- block_ws.append(
- ws.narrow(1, w_idx, block.num_conv + block.num_torgb))
- w_idx += block.num_conv
-
- x = img = None
- for res, cur_ws in zip(self.block_resolutions, block_ws):
- block = getattr(self, f'b{res}')
- x, img = block(x, img, cur_ws, **block_kwargs)
- features.append(x)
- if return_feature:
- return img, features
- else:
- return img
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class Generator(torch.nn.Module):
- def __init__(self,
- z_dim, # Input latent (Z) dimensionality.
- # Conditioning label (C) dimensionality.
- c_dim,
- # Intermediate latent (W) dimensionality.
- w_dim,
- img_resolution, # Output resolution.
- square,
- img_channels, # Number of output color channels.
- mapping_kwargs={}, # Arguments for MappingNetwork.
- synthesis_kwargs={}, # Arguments for SynthesisNetwork.
- padding=False
- ):
- super().__init__()
- self.z_dim = z_dim
- self.c_dim = c_dim
- self.w_dim = w_dim
- self.square = square
- self.img_resolution = img_resolution
- self.img_channels = img_channels
- self.padding = padding
- self.synthesis = SynthesisNetwork(
- w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, square=square, **synthesis_kwargs)
- self.num_ws = self.synthesis.num_ws
- self.mapping = MappingNetwork(
- z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs)
-
- def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, input_is_w=False, return_feature=False, **synthesis_kwargs):
- if input_is_w:
- ws = z
- if ws.dim() == 2:
- ws = ws.unsqueeze(1).repeat([1, self.mapping.num_ws, 1])
- else:
- ws = self.mapping(z, c, truncation_psi=truncation_psi,
- truncation_cutoff=truncation_cutoff)
- img = self.synthesis(
- ws, return_feature=return_feature, **synthesis_kwargs)
- if return_feature:
- img, feature = img
- if self.padding:
- pad = (img.size(2) - img.size(3)) // 2
- img = torch.nn.functional.pad(img, (pad, pad), "constant", 1)
- if return_feature:
- for i, feat in enumerate(feature):
- pad = (feat.size(2) - feat.size(3)) // 2
- feature[i] = torch.nn.functional.pad(
- feat, (pad, pad), "constant", 0)
- if return_feature:
- return img, feature
- else:
- return img
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class DiscriminatorBlock(torch.nn.Module):
- def __init__(self,
- # Number of input channels, 0 = first block.
- in_channels,
- # Number of intermediate channels.
- tmp_channels,
- # Number of output channels.
- out_channels,
- # Resolution of this block.
- resolution,
- # Number of input color channels.
- img_channels,
- # Index of the first layer.
- first_layer_idx,
- # Architecture: 'orig', 'skip', 'resnet'.
- architecture='resnet',
- # Activation function: 'relu', 'lrelu', etc.
- activation='lrelu',
- # Low-pass filter to apply when resampling activations.
- resample_filter=[1, 3, 3, 1],
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=None,
- use_fp16=False, # Use FP16 for this block?
- fp16_channels_last=False, # Use channels-last memory format with FP16?
- # Freeze-D: Number of layers to freeze.
- freeze_layers=0,
- square=False,
- ):
- assert in_channels in [0, tmp_channels]
- assert architecture in ['orig', 'skip', 'resnet']
- super().__init__()
- self.in_channels = in_channels
- self.resolution = resolution
- self.img_channels = img_channels
- self.first_layer_idx = first_layer_idx
- self.architecture = architecture
- self.use_fp16 = use_fp16
- self.channels_last = (use_fp16 and fp16_channels_last)
- self.register_buffer(
- 'resample_filter', upfirdn2d.setup_filter(resample_filter))
- self.square = square
-
- self.num_layers = 0
-
- def trainable_gen():
- while True:
- layer_idx = self.first_layer_idx + self.num_layers
- trainable = (layer_idx >= freeze_layers)
- self.num_layers += 1
- yield trainable
- trainable_iter = trainable_gen()
-
- if in_channels == 0 or architecture == 'skip':
- self.fromrgb = Conv2dLayer(img_channels, tmp_channels, kernel_size=1, activation=activation,
- trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last)
-
- self.conv0 = Conv2dLayer(tmp_channels, tmp_channels, kernel_size=3, activation=activation,
- trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last)
-
- self.conv1 = Conv2dLayer(tmp_channels, out_channels, kernel_size=3, activation=activation, down=2,
- trainable=next(trainable_iter), resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last)
-
- if architecture == 'resnet':
- self.skip = Conv2dLayer(tmp_channels, out_channels, kernel_size=1, bias=False, down=2,
- trainable=next(trainable_iter), resample_filter=resample_filter, channels_last=self.channels_last)
-
- def forward(self, x, img, force_fp32=False):
- dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32
- memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format
-
- # Input.
- if x is not None:
- if self.square:
- misc.assert_shape(
- x, [None, self.in_channels, self.resolution, self.resolution])
- else:
- misc.assert_shape(
- x, [None, self.in_channels, self.resolution, self.resolution // 2])
- x = x.to(dtype=dtype, memory_format=memory_format)
-
- # FromRGB.
- if self.in_channels == 0 or self.architecture == 'skip':
- if self.square:
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution, self.resolution])
- else:
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution, self.resolution // 2])
- img = img.to(dtype=dtype, memory_format=memory_format)
- y = self.fromrgb(img)
- x = x + y if x is not None else y
- img = upfirdn2d.downsample2d(
- img, self.resample_filter) if self.architecture == 'skip' else None
-
- # Main layers.
- if self.architecture == 'resnet':
- y = self.skip(x, gain=np.sqrt(0.5))
- x = self.conv0(x)
- x = self.conv1(x, gain=np.sqrt(0.5))
- x = y.add_(x)
- else:
- x = self.conv0(x)
- x = self.conv1(x)
-
- assert x.dtype == dtype
- return x, img
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class MinibatchStdLayer(torch.nn.Module):
- def __init__(self, group_size, num_channels=1):
- super().__init__()
- self.group_size = group_size
- self.num_channels = num_channels
-
- def forward(self, x):
- N, C, H, W = x.shape
- with misc.suppress_tracer_warnings(): # as_tensor results are registered as constants
- G = torch.min(torch.as_tensor(self.group_size), torch.as_tensor(
- N)) if self.group_size is not None else N
- F = self.num_channels
- c = C // F
-
- # [GnFcHW] Split minibatch N into n groups of size G, and channels C into F groups of size c.
- y = x.reshape(G, -1, F, c, H, W)
- # [GnFcHW] Subtract mean over group.
- y = y - y.mean(dim=0)
- # [nFcHW] Calc variance over group.
- y = y.square().mean(dim=0)
- y = (y + 1e-8).sqrt() # [nFcHW] Calc stddev over group.
- # [nF] Take average over channels and pixels.
- y = y.mean(dim=[2, 3, 4])
- y = y.reshape(-1, F, 1, 1) # [nF11] Add missing dimensions.
- # [NFHW] Replicate over group and pixels.
- y = y.repeat(G, 1, H, W)
- # [NCHW] Append to input as new channels.
- x = torch.cat([x, y], dim=1)
- return x
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class DiscriminatorEpilogue(torch.nn.Module):
- def __init__(self,
- in_channels, # Number of input channels.
- # Dimensionality of mapped conditioning label, 0 = no label.
- cmap_dim,
- resolution, # Resolution of this block.
- # Number of input color channels.
- img_channels,
- # Architecture: 'orig', 'skip', 'resnet'.
- architecture='resnet',
- # Group size for the minibatch standard deviation layer, None = entire minibatch.
- mbstd_group_size=4,
- # Number of features for the minibatch standard deviation layer, 0 = disable.
- mbstd_num_channels=1,
- # Activation function: 'relu', 'lrelu', etc.
- activation='lrelu',
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=None,
- square=False,
- ):
- assert architecture in ['orig', 'skip', 'resnet']
- super().__init__()
- self.in_channels = in_channels
- self.cmap_dim = cmap_dim
- self.resolution = resolution
- self.img_channels = img_channels
- self.architecture = architecture
- self.square = square
-
- if architecture == 'skip':
- self.fromrgb = Conv2dLayer(
- img_channels, in_channels, kernel_size=1, activation=activation)
- self.mbstd = MinibatchStdLayer(
- group_size=mbstd_group_size, num_channels=mbstd_num_channels) if mbstd_num_channels > 0 else None
- self.conv = Conv2dLayer(in_channels + mbstd_num_channels, in_channels,
- kernel_size=3, activation=activation, conv_clamp=conv_clamp)
-
- if self.square:
- self.fc = FullyConnectedLayer(
- in_channels * (resolution ** 2), in_channels, activation=activation)
- else:
- self.fc = FullyConnectedLayer(
- in_channels * (resolution ** 2 // 2), in_channels, activation=activation)
-
- self.out = FullyConnectedLayer(
- in_channels, 1 if cmap_dim == 0 else cmap_dim)
-
- def forward(self, x, img, cmap, force_fp32=False):
- if self.square:
- misc.assert_shape(x, [None, self.in_channels,
- self.resolution, self.resolution])
- else:
- misc.assert_shape(
- x, [None, self.in_channels, self.resolution, self.resolution // 2]) # [NCHW]
- _ = force_fp32 # unused
- dtype = torch.float32
- memory_format = torch.contiguous_format
-
- # FromRGB.
- x = x.to(dtype=dtype, memory_format=memory_format)
- if self.architecture == 'skip':
- if self.square:
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution, self.resolution])
- else:
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution, self.resolution // 2])
- img = img.to(dtype=dtype, memory_format=memory_format)
- x = x + self.fromrgb(img)
-
- # Main layers.
- if self.mbstd is not None:
- x = self.mbstd(x)
- x = self.conv(x)
- x = self.fc(x.flatten(1))
- x = self.out(x)
-
- # Conditioning.
- if self.cmap_dim > 0:
- misc.assert_shape(cmap, [None, self.cmap_dim])
- x = (x * cmap).sum(dim=1, keepdim=True) * \
- (1 / np.sqrt(self.cmap_dim))
-
- assert x.dtype == dtype
- return x
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class Discriminator(torch.nn.Module):
- def __init__(self,
- # Conditioning label (C) dimensionality.
- c_dim,
- img_resolution, # Input resolution.
- # Number of input color channels.
- img_channels,
- # Architecture: 'orig', 'skip', 'resnet'.
- architecture='resnet',
- # Overall multiplier for the number of channels.
- channel_base=32768,
- # Maximum number of channels in any layer.
- channel_max=512,
- # Use FP16 for the N highest resolutions.
- num_fp16_res=0,
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=None,
- # Dimensionality of mapped conditioning label, None = default.
- cmap_dim=None,
- square=False, # default for rectangle images
- block_kwargs={}, # Arguments for DiscriminatorBlock.
- mapping_kwargs={}, # Arguments for MappingNetwork.
- # Arguments for DiscriminatorEpilogue.
- epilogue_kwargs={},
- ):
- super().__init__()
- self.c_dim = c_dim
- self.img_resolution = img_resolution
- self.img_resolution_log2 = int(np.log2(img_resolution))
- self.img_channels = img_channels
- self.square = square
- self.block_resolutions = [
- 2 ** i for i in range(self.img_resolution_log2, 2, -1)]
- channels_dict = {res: min(channel_base // res, channel_max)
- for res in self.block_resolutions + [4]}
- fp16_resolution = max(
- 2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8)
-
- if cmap_dim is None:
- cmap_dim = channels_dict[4]
- if c_dim == 0:
- cmap_dim = 0
-
- common_kwargs = dict(img_channels=img_channels,
- architecture=architecture, conv_clamp=conv_clamp)
- cur_layer_idx = 0
- for res in self.block_resolutions:
- in_channels = channels_dict[res] if res < img_resolution else 0
- tmp_channels = channels_dict[res]
- out_channels = channels_dict[res // 2]
- use_fp16 = (res >= fp16_resolution)
- block = DiscriminatorBlock(in_channels, tmp_channels, out_channels, resolution=res,
- first_layer_idx=cur_layer_idx, use_fp16=use_fp16, square=square, **block_kwargs, **common_kwargs)
- setattr(self, f'b{res}', block)
- cur_layer_idx += block.num_layers
- if c_dim > 0:
- self.mapping = MappingNetwork(
- z_dim=0, c_dim=c_dim, w_dim=cmap_dim, num_ws=None, w_avg_beta=None, **mapping_kwargs)
- self.b4 = DiscriminatorEpilogue(
- channels_dict[4], cmap_dim=cmap_dim, resolution=4, square=square, **epilogue_kwargs, **common_kwargs)
-
- def forward(self, img, c, **block_kwargs):
- x = None
- for res in self.block_resolutions:
- block = getattr(self, f'b{res}')
- x, img = block(x, img, **block_kwargs)
-
- cmap = None
- if self.c_dim > 0:
- cmap = self.mapping(None, c)
- x = self.b4(x, img, cmap)
- return x
-
-# ----------------------------------------------------------------------------
diff --git a/spaces/Amrrs/openai-whisper-live-transcribe/app.py b/spaces/Amrrs/openai-whisper-live-transcribe/app.py
deleted file mode 100644
index 89966c9fd3ea7fac0d7668d97fda3919b2e676d2..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/openai-whisper-live-transcribe/app.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import whisper
-import gradio as gr
-
-model = whisper.load_model("small")
-
-def transcribe(audio):
-
- #time.sleep(3)
- # load audio and pad/trim it to fit 30 seconds
- audio = whisper.load_audio(audio)
- audio = whisper.pad_or_trim(audio)
-
- # make log-Mel spectrogram and move to the same device as the model
- mel = whisper.log_mel_spectrogram(audio).to(model.device)
-
- # detect the spoken language
- _, probs = model.detect_language(mel)
- print(f"Detected language: {max(probs, key=probs.get)}")
-
- # decode the audio
- options = whisper.DecodingOptions(fp16=False)
- result = whisper.decode(model, mel, options)
- return result.text
-
-
-
-gr.Interface(
- title = 'OpenAI Whisper ASR Gradio Web UI',
- fn=transcribe,
- inputs=[
- gr.inputs.Audio(source="microphone", type="filepath")
- ],
- outputs=[
- "textbox"
- ],
- live=True).launch()
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/paint_by_example.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/paint_by_example.md
deleted file mode 100644
index ec7172060926649e66e678ed0dcbf04ca8781c0d..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/paint_by_example.md
+++ /dev/null
@@ -1,39 +0,0 @@
-
-
-# PaintByExample
-
-[Paint by Example: Exemplar-based Image Editing with Diffusion Models](https://huggingface.co/papers/2211.13227) is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen.
-
-The abstract from the paper is:
-
-*Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity.*
-
-The original codebase can be found at [Fantasy-Studio/Paint-by-Example](https://github.com/Fantasy-Studio/Paint-by-Example), and you can try it out in a [demo](https://huggingface.co/spaces/Fantasy-Studio/Paint-by-Example).
-
-## Tips
-
-PaintByExample is supported by the official [Fantasy-Studio/Paint-by-Example](https://huggingface.co/Fantasy-Studio/Paint-by-Example) checkpoint. The checkpoint is warm-started from [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) to inpaint partly masked images conditioned on example and reference images.
-
-
-
-Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
-
-
-
-## PaintByExamplePipeline
-[[autodoc]] PaintByExamplePipeline
- - all
- - __call__
-
-## StableDiffusionPipelineOutput
-[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/unidiffuser/modeling_text_decoder.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/unidiffuser/modeling_text_decoder.py
deleted file mode 100644
index 9b962f6e065621c8fc83775f555bbd732ccc8a26..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/unidiffuser/modeling_text_decoder.py
+++ /dev/null
@@ -1,296 +0,0 @@
-from typing import Optional
-
-import numpy as np
-import torch
-from torch import nn
-from transformers import GPT2Config, GPT2LMHeadModel
-from transformers.modeling_utils import ModuleUtilsMixin
-
-from ...configuration_utils import ConfigMixin, register_to_config
-from ...models import ModelMixin
-
-
-# Modified from ClipCaptionModel in https://github.com/thu-ml/unidiffuser/blob/main/libs/caption_decoder.py
-class UniDiffuserTextDecoder(ModelMixin, ConfigMixin, ModuleUtilsMixin):
- """
- Text decoder model for an image-text [UniDiffuser](https://arxiv.org/pdf/2303.06555.pdf) model. This is used to
- generate text from the UniDiffuser image-text embedding.
-
- Parameters:
- prefix_length (`int`):
- Max number of prefix tokens that will be supplied to the model.
- prefix_inner_dim (`int`):
- The hidden size of the incoming prefix embeddings. For UniDiffuser, this would be the hidden dim of the
- CLIP text encoder.
- prefix_hidden_dim (`int`, *optional*):
- Hidden dim of the MLP if we encode the prefix.
- vocab_size (`int`, *optional*, defaults to 50257):
- Vocabulary size of the GPT-2 model. Defines the number of different tokens that can be represented by the
- `inputs_ids` passed when calling [`GPT2Model`] or [`TFGPT2Model`].
- n_positions (`int`, *optional*, defaults to 1024):
- The maximum sequence length that this model might ever be used with. Typically set this to something large
- just in case (e.g., 512 or 1024 or 2048).
- n_embd (`int`, *optional*, defaults to 768):
- Dimensionality of the embeddings and hidden states.
- n_layer (`int`, *optional*, defaults to 12):
- Number of hidden layers in the Transformer encoder.
- n_head (`int`, *optional*, defaults to 12):
- Number of attention heads for each attention layer in the Transformer encoder.
- n_inner (`int`, *optional*, defaults to None):
- Dimensionality of the inner feed-forward layers. `None` will set it to 4 times `n_embd`.
- activation_function (`str`, *optional*, defaults to `"gelu"`):
- Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`.
- resid_pdrop (`float`, *optional*, defaults to 0.1):
- The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- embd_pdrop (`float`, *optional*, defaults to 0.1):
- The dropout ratio for the embeddings.
- attn_pdrop (`float`, *optional*, defaults to 0.1):
- The dropout ratio for the attention.
- layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
- The epsilon to use in the layer normalization layers.
- initializer_range (`float`, *optional*, defaults to 0.02):
- The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- scale_attn_weights (`bool`, *optional*, defaults to `True`):
- Scale attention weights by dividing by sqrt(hidden_size).
- use_cache (`bool`, *optional*, defaults to `True`):
- Whether or not the model should return the last key/values attentions (not used by all models).
- scale_attn_by_inverse_layer_idx (`bool`, *optional*, defaults to `False`):
- Whether to additionally scale attention weights by `1 / (layer_idx + 1)`.
- reorder_and_upcast_attn (`bool`, *optional*, defaults to `False`):
- Whether to scale keys (K) prior to computing attention (dot-product) and upcast attention
- dot-product/softmax to float() when training with mixed precision.
- """
-
- _keys_to_ignore_on_load_unexpected = [r"h\.\d+\.attn\.bias", r"h\.\d+\.attn\.masked_bias"]
-
- @register_to_config
- def __init__(
- self,
- prefix_length: int,
- prefix_inner_dim: int,
- prefix_hidden_dim: Optional[int] = None,
- vocab_size: int = 50257, # Start of GPT2 config args
- n_positions: int = 1024,
- n_embd: int = 768,
- n_layer: int = 12,
- n_head: int = 12,
- n_inner: Optional[int] = None,
- activation_function: str = "gelu_new",
- resid_pdrop: float = 0.1,
- embd_pdrop: float = 0.1,
- attn_pdrop: float = 0.1,
- layer_norm_epsilon: float = 1e-5,
- initializer_range: float = 0.02,
- scale_attn_weights: bool = True,
- use_cache: bool = True,
- scale_attn_by_inverse_layer_idx: bool = False,
- reorder_and_upcast_attn: bool = False,
- ):
- super().__init__()
-
- self.prefix_length = prefix_length
-
- if prefix_inner_dim != n_embd and prefix_hidden_dim is None:
- raise ValueError(
- f"`prefix_hidden_dim` cannot be `None` when `prefix_inner_dim`: {prefix_inner_dim} and"
- f" `n_embd`: {n_embd} are not equal."
- )
-
- self.prefix_inner_dim = prefix_inner_dim
- self.prefix_hidden_dim = prefix_hidden_dim
-
- self.encode_prefix = (
- nn.Linear(self.prefix_inner_dim, self.prefix_hidden_dim)
- if self.prefix_hidden_dim is not None
- else nn.Identity()
- )
- self.decode_prefix = (
- nn.Linear(self.prefix_hidden_dim, n_embd) if self.prefix_hidden_dim is not None else nn.Identity()
- )
-
- gpt_config = GPT2Config(
- vocab_size=vocab_size,
- n_positions=n_positions,
- n_embd=n_embd,
- n_layer=n_layer,
- n_head=n_head,
- n_inner=n_inner,
- activation_function=activation_function,
- resid_pdrop=resid_pdrop,
- embd_pdrop=embd_pdrop,
- attn_pdrop=attn_pdrop,
- layer_norm_epsilon=layer_norm_epsilon,
- initializer_range=initializer_range,
- scale_attn_weights=scale_attn_weights,
- use_cache=use_cache,
- scale_attn_by_inverse_layer_idx=scale_attn_by_inverse_layer_idx,
- reorder_and_upcast_attn=reorder_and_upcast_attn,
- )
- self.transformer = GPT2LMHeadModel(gpt_config)
-
- def forward(
- self,
- input_ids: torch.Tensor,
- prefix_embeds: torch.Tensor,
- attention_mask: Optional[torch.Tensor] = None,
- labels: Optional[torch.Tensor] = None,
- ):
- """
- Args:
- input_ids (`torch.Tensor` of shape `(N, max_seq_len)`):
- Text tokens to use for inference.
- prefix_embeds (`torch.Tensor` of shape `(N, prefix_length, 768)`):
- Prefix embedding to prepend to the embedded tokens.
- attention_mask (`torch.Tensor` of shape `(N, prefix_length + max_seq_len)`, *optional*):
- Attention mask for the prefix embedding.
- labels (`torch.Tensor`, *optional*):
- Labels to use for language modeling.
- """
- embedding_text = self.transformer.transformer.wte(input_ids)
- hidden = self.encode_prefix(prefix_embeds)
- prefix_embeds = self.decode_prefix(hidden)
- embedding_cat = torch.cat((prefix_embeds, embedding_text), dim=1)
-
- if labels is not None:
- dummy_token = self.get_dummy_token(input_ids.shape[0], input_ids.device)
- labels = torch.cat((dummy_token, input_ids), dim=1)
- out = self.transformer(inputs_embeds=embedding_cat, labels=labels, attention_mask=attention_mask)
- if self.prefix_hidden_dim is not None:
- return out, hidden
- else:
- return out
-
- def get_dummy_token(self, batch_size: int, device: torch.device) -> torch.Tensor:
- return torch.zeros(batch_size, self.prefix_length, dtype=torch.int64, device=device)
-
- def encode(self, prefix):
- return self.encode_prefix(prefix)
-
- @torch.no_grad()
- def generate_captions(self, features, eos_token_id, device):
- """
- Generate captions given text embedding features.
-
- Args:
- features (`torch.Tensor` of shape `(B, L, D)`):
- Text embedding features to generate captions from.
- eos_token_id (`int`):
- The token ID of the EOS token for the text decoder model.
- device:
- Device to perform text generation on.
-
- Returns:
- `Tuple(torch.Tensor, torch.Tensor)`: A tuple of the generated token sequences and the corresponding
- sequence lengths.
- """
-
- features = torch.split(features, 1, dim=0)
- generated_tokens = []
- generated_seq_lengths = []
- for feature in features:
- feature = self.decode_prefix(feature.to(device)) # back to the clip feature
- # Only support beam search for now
- output_tokens, seq_lengths = self.generate_beam(
- input_embeds=feature, device=device, eos_token_id=eos_token_id
- )
- generated_tokens.append(output_tokens[0])
- generated_seq_lengths.append(seq_lengths[0])
- generated_tokens = torch.stack(generated_tokens)
- generated_seq_lengths = torch.stack(generated_seq_lengths)
- return generated_tokens, generated_seq_lengths
-
- @torch.no_grad()
- def generate_beam(
- self,
- input_ids=None,
- input_embeds=None,
- device=None,
- beam_size: int = 5,
- entry_length: int = 67,
- temperature: float = 1.0,
- eos_token_id: Optional[int] = None,
- ):
- """
- Generates text using the given tokenizer and text prompt or token embedding via beam search. This
- implementation is based on the beam search implementation from the [original UniDiffuser
- code](https://github.com/thu-ml/unidiffuser/blob/main/libs/caption_decoder.py#L89).
-
- Args:
- eos_token_id (`int`, *optional*):
- The token ID of the EOS token for the text decoder model.
- input_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`, *optional*):
- Tokenizer indices of input sequence tokens in the vocabulary. One of `input_ids` and `input_embeds`
- must be supplied.
- input_embeds (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`, *optional*):
- An embedded representation to directly pass to the transformer as a prefix for beam search. One of
- `input_ids` and `input_embeds` must be supplied.
- device:
- The device to perform beam search on.
- beam_size (`int`, *optional*, defaults to `5`):
- The number of best states to store during beam search.
- entry_length (`int`, *optional*, defaults to `67`):
- The number of iterations to run beam search.
- temperature (`float`, *optional*, defaults to 1.0):
- The temperature to use when performing the softmax over logits from the decoding model.
-
- Returns:
- `Tuple(torch.Tensor, torch.Tensor)`: A tuple of tensors where the first element is a tensor of generated
- token sequences sorted by score in descending order, and the second element is the sequence lengths
- corresponding to those sequences.
- """
- # Generates text until stop_token is reached using beam search with the desired beam size.
- stop_token_index = eos_token_id
- tokens = None
- scores = None
- seq_lengths = torch.ones(beam_size, device=device, dtype=torch.int)
- is_stopped = torch.zeros(beam_size, device=device, dtype=torch.bool)
-
- if input_embeds is not None:
- generated = input_embeds
- else:
- generated = self.transformer.transformer.wte(input_ids)
-
- for i in range(entry_length):
- outputs = self.transformer(inputs_embeds=generated)
- logits = outputs.logits
- logits = logits[:, -1, :] / (temperature if temperature > 0 else 1.0)
- logits = logits.softmax(-1).log()
-
- if scores is None:
- scores, next_tokens = logits.topk(beam_size, -1)
- generated = generated.expand(beam_size, *generated.shape[1:])
- next_tokens, scores = next_tokens.permute(1, 0), scores.squeeze(0)
- if tokens is None:
- tokens = next_tokens
- else:
- tokens = tokens.expand(beam_size, *tokens.shape[1:])
- tokens = torch.cat((tokens, next_tokens), dim=1)
- else:
- logits[is_stopped] = -float(np.inf)
- logits[is_stopped, 0] = 0
- scores_sum = scores[:, None] + logits
- seq_lengths[~is_stopped] += 1
- scores_sum_average = scores_sum / seq_lengths[:, None]
- scores_sum_average, next_tokens = scores_sum_average.view(-1).topk(beam_size, -1)
- next_tokens_source = next_tokens // scores_sum.shape[1]
- seq_lengths = seq_lengths[next_tokens_source]
- next_tokens = next_tokens % scores_sum.shape[1]
- next_tokens = next_tokens.unsqueeze(1)
- tokens = tokens[next_tokens_source]
- tokens = torch.cat((tokens, next_tokens), dim=1)
- generated = generated[next_tokens_source]
- scores = scores_sum_average * seq_lengths
- is_stopped = is_stopped[next_tokens_source]
-
- next_token_embed = self.transformer.transformer.wte(next_tokens.squeeze()).view(generated.shape[0], 1, -1)
- generated = torch.cat((generated, next_token_embed), dim=1)
- is_stopped = is_stopped + next_tokens.eq(stop_token_index).squeeze()
- if is_stopped.all():
- break
-
- scores = scores / seq_lengths
- order = scores.argsort(descending=True)
- # tokens tensors are already padded to max_seq_length
- output_texts = [tokens[i] for i in order]
- output_texts = torch.stack(output_texts, dim=0)
- seq_lengths = torch.tensor([seq_lengths[i] for i in order], dtype=seq_lengths.dtype)
- return output_texts, seq_lengths
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/deprecation_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/deprecation_utils.py
deleted file mode 100644
index f482deddd2f46b8d2e29d5229faa0e9a21f2fd98..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/deprecation_utils.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import inspect
-import warnings
-from typing import Any, Dict, Optional, Union
-
-from packaging import version
-
-
-def deprecate(*args, take_from: Optional[Union[Dict, Any]] = None, standard_warn=True, stacklevel=2):
- from .. import __version__
-
- deprecated_kwargs = take_from
- values = ()
- if not isinstance(args[0], tuple):
- args = (args,)
-
- for attribute, version_name, message in args:
- if version.parse(version.parse(__version__).base_version) >= version.parse(version_name):
- raise ValueError(
- f"The deprecation tuple {(attribute, version_name, message)} should be removed since diffusers'"
- f" version {__version__} is >= {version_name}"
- )
-
- warning = None
- if isinstance(deprecated_kwargs, dict) and attribute in deprecated_kwargs:
- values += (deprecated_kwargs.pop(attribute),)
- warning = f"The `{attribute}` argument is deprecated and will be removed in version {version_name}."
- elif hasattr(deprecated_kwargs, attribute):
- values += (getattr(deprecated_kwargs, attribute),)
- warning = f"The `{attribute}` attribute is deprecated and will be removed in version {version_name}."
- elif deprecated_kwargs is None:
- warning = f"`{attribute}` is deprecated and will be removed in version {version_name}."
-
- if warning is not None:
- warning = warning + " " if standard_warn else ""
- warnings.warn(warning + message, FutureWarning, stacklevel=stacklevel)
-
- if isinstance(deprecated_kwargs, dict) and len(deprecated_kwargs) > 0:
- call_frame = inspect.getouterframes(inspect.currentframe())[1]
- filename = call_frame.filename
- line_number = call_frame.lineno
- function = call_frame.function
- key, value = next(iter(deprecated_kwargs.items()))
- raise TypeError(f"{function} in {filename} line {line_number-1} got an unexpected keyword argument `{key}`")
-
- if len(values) == 0:
- return
- elif len(values) == 1:
- return values[0]
- return values
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_lora_layers.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_lora_layers.py
deleted file mode 100644
index 000748312fca1053a22f2178275b52a5dce310fe..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_lora_layers.py
+++ /dev/null
@@ -1,841 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import os
-import tempfile
-import unittest
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from huggingface_hub.repocard import RepoCard
-from transformers import CLIPTextConfig, CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
-
-from diffusers import (
- AutoencoderKL,
- DDIMScheduler,
- EulerDiscreteScheduler,
- StableDiffusionPipeline,
- StableDiffusionXLPipeline,
- UNet2DConditionModel,
-)
-from diffusers.loaders import AttnProcsLayers, LoraLoaderMixin, PatchedLoraProjection, text_encoder_attn_modules
-from diffusers.models.attention_processor import (
- Attention,
- AttnProcessor,
- AttnProcessor2_0,
- LoRAAttnProcessor,
- LoRAAttnProcessor2_0,
- LoRAXFormersAttnProcessor,
- XFormersAttnProcessor,
-)
-from diffusers.utils import floats_tensor, torch_device
-from diffusers.utils.testing_utils import require_torch_gpu, slow
-
-
-def create_unet_lora_layers(unet: nn.Module):
- lora_attn_procs = {}
- for name in unet.attn_processors.keys():
- cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
- if name.startswith("mid_block"):
- hidden_size = unet.config.block_out_channels[-1]
- elif name.startswith("up_blocks"):
- block_id = int(name[len("up_blocks.")])
- hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
- elif name.startswith("down_blocks"):
- block_id = int(name[len("down_blocks.")])
- hidden_size = unet.config.block_out_channels[block_id]
- lora_attn_processor_class = (
- LoRAAttnProcessor2_0 if hasattr(F, "scaled_dot_product_attention") else LoRAAttnProcessor
- )
- lora_attn_procs[name] = lora_attn_processor_class(
- hidden_size=hidden_size, cross_attention_dim=cross_attention_dim
- )
- unet_lora_layers = AttnProcsLayers(lora_attn_procs)
- return lora_attn_procs, unet_lora_layers
-
-
-def create_text_encoder_lora_attn_procs(text_encoder: nn.Module):
- text_lora_attn_procs = {}
- lora_attn_processor_class = (
- LoRAAttnProcessor2_0 if hasattr(F, "scaled_dot_product_attention") else LoRAAttnProcessor
- )
- for name, module in text_encoder_attn_modules(text_encoder):
- if isinstance(module.out_proj, nn.Linear):
- out_features = module.out_proj.out_features
- elif isinstance(module.out_proj, PatchedLoraProjection):
- out_features = module.out_proj.regular_linear_layer.out_features
- else:
- assert False, module.out_proj.__class__
-
- text_lora_attn_procs[name] = lora_attn_processor_class(hidden_size=out_features, cross_attention_dim=None)
- return text_lora_attn_procs
-
-
-def create_text_encoder_lora_layers(text_encoder: nn.Module):
- text_lora_attn_procs = create_text_encoder_lora_attn_procs(text_encoder)
- text_encoder_lora_layers = AttnProcsLayers(text_lora_attn_procs)
- return text_encoder_lora_layers
-
-
-def set_lora_weights(lora_attn_parameters, randn_weight=False):
- with torch.no_grad():
- for parameter in lora_attn_parameters:
- if randn_weight:
- parameter[:] = torch.randn_like(parameter)
- else:
- torch.zero_(parameter)
-
-
-class LoraLoaderMixinTests(unittest.TestCase):
- def get_dummy_components(self):
- torch.manual_seed(0)
- unet = UNet2DConditionModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=4,
- out_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
- cross_attention_dim=32,
- )
- scheduler = DDIMScheduler(
- beta_start=0.00085,
- beta_end=0.012,
- beta_schedule="scaled_linear",
- clip_sample=False,
- set_alpha_to_one=False,
- steps_offset=1,
- )
- torch.manual_seed(0)
- vae = AutoencoderKL(
- block_out_channels=[32, 64],
- in_channels=3,
- out_channels=3,
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
- latent_channels=4,
- )
- text_encoder_config = CLIPTextConfig(
- bos_token_id=0,
- eos_token_id=2,
- hidden_size=32,
- intermediate_size=37,
- layer_norm_eps=1e-05,
- num_attention_heads=4,
- num_hidden_layers=5,
- pad_token_id=1,
- vocab_size=1000,
- )
- text_encoder = CLIPTextModel(text_encoder_config)
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- unet_lora_attn_procs, unet_lora_layers = create_unet_lora_layers(unet)
- text_encoder_lora_layers = create_text_encoder_lora_layers(text_encoder)
-
- pipeline_components = {
- "unet": unet,
- "scheduler": scheduler,
- "vae": vae,
- "text_encoder": text_encoder,
- "tokenizer": tokenizer,
- "safety_checker": None,
- "feature_extractor": None,
- }
- lora_components = {
- "unet_lora_layers": unet_lora_layers,
- "text_encoder_lora_layers": text_encoder_lora_layers,
- "unet_lora_attn_procs": unet_lora_attn_procs,
- }
- return pipeline_components, lora_components
-
- def get_dummy_inputs(self, with_generator=True):
- batch_size = 1
- sequence_length = 10
- num_channels = 4
- sizes = (32, 32)
-
- generator = torch.manual_seed(0)
- noise = floats_tensor((batch_size, num_channels) + sizes)
- input_ids = torch.randint(1, sequence_length, size=(batch_size, sequence_length), generator=generator)
-
- pipeline_inputs = {
- "prompt": "A painting of a squirrel eating a burger",
- "num_inference_steps": 2,
- "guidance_scale": 6.0,
- "output_type": "np",
- }
- if with_generator:
- pipeline_inputs.update({"generator": generator})
-
- return noise, input_ids, pipeline_inputs
-
- # copied from: https://colab.research.google.com/gist/sayakpaul/df2ef6e1ae6d8c10a49d859883b10860/scratchpad.ipynb
- def get_dummy_tokens(self):
- max_seq_length = 77
-
- inputs = torch.randint(2, 56, size=(1, max_seq_length), generator=torch.manual_seed(0))
-
- prepared_inputs = {}
- prepared_inputs["input_ids"] = inputs
- return prepared_inputs
-
- def create_lora_weight_file(self, tmpdirname):
- _, lora_components = self.get_dummy_components()
- LoraLoaderMixin.save_lora_weights(
- save_directory=tmpdirname,
- unet_lora_layers=lora_components["unet_lora_layers"],
- text_encoder_lora_layers=lora_components["text_encoder_lora_layers"],
- )
- self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin")))
-
- def test_lora_save_load(self):
- pipeline_components, lora_components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**pipeline_components)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- _, _, pipeline_inputs = self.get_dummy_inputs()
-
- original_images = sd_pipe(**pipeline_inputs).images
- orig_image_slice = original_images[0, -3:, -3:, -1]
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- LoraLoaderMixin.save_lora_weights(
- save_directory=tmpdirname,
- unet_lora_layers=lora_components["unet_lora_layers"],
- text_encoder_lora_layers=lora_components["text_encoder_lora_layers"],
- )
- self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin")))
- sd_pipe.load_lora_weights(tmpdirname)
-
- lora_images = sd_pipe(**pipeline_inputs).images
- lora_image_slice = lora_images[0, -3:, -3:, -1]
-
- # Outputs shouldn't match.
- self.assertFalse(torch.allclose(torch.from_numpy(orig_image_slice), torch.from_numpy(lora_image_slice)))
-
- def test_lora_save_load_safetensors(self):
- pipeline_components, lora_components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**pipeline_components)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- _, _, pipeline_inputs = self.get_dummy_inputs()
-
- original_images = sd_pipe(**pipeline_inputs).images
- orig_image_slice = original_images[0, -3:, -3:, -1]
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- LoraLoaderMixin.save_lora_weights(
- save_directory=tmpdirname,
- unet_lora_layers=lora_components["unet_lora_layers"],
- text_encoder_lora_layers=lora_components["text_encoder_lora_layers"],
- safe_serialization=True,
- )
- self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.safetensors")))
- sd_pipe.load_lora_weights(tmpdirname)
-
- lora_images = sd_pipe(**pipeline_inputs).images
- lora_image_slice = lora_images[0, -3:, -3:, -1]
-
- # Outputs shouldn't match.
- self.assertFalse(torch.allclose(torch.from_numpy(orig_image_slice), torch.from_numpy(lora_image_slice)))
-
- def test_lora_save_load_legacy(self):
- pipeline_components, lora_components = self.get_dummy_components()
- unet_lora_attn_procs = lora_components["unet_lora_attn_procs"]
- sd_pipe = StableDiffusionPipeline(**pipeline_components)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- _, _, pipeline_inputs = self.get_dummy_inputs()
-
- original_images = sd_pipe(**pipeline_inputs).images
- orig_image_slice = original_images[0, -3:, -3:, -1]
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- unet = sd_pipe.unet
- unet.set_attn_processor(unet_lora_attn_procs)
- unet.save_attn_procs(tmpdirname)
- self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin")))
- sd_pipe.load_lora_weights(tmpdirname)
-
- lora_images = sd_pipe(**pipeline_inputs).images
- lora_image_slice = lora_images[0, -3:, -3:, -1]
-
- # Outputs shouldn't match.
- self.assertFalse(torch.allclose(torch.from_numpy(orig_image_slice), torch.from_numpy(lora_image_slice)))
-
- def test_text_encoder_lora_monkey_patch(self):
- pipeline_components, _ = self.get_dummy_components()
- pipe = StableDiffusionPipeline(**pipeline_components)
-
- dummy_tokens = self.get_dummy_tokens()
-
- # inference without lora
- outputs_without_lora = pipe.text_encoder(**dummy_tokens)[0]
- assert outputs_without_lora.shape == (1, 77, 32)
-
- # monkey patch
- params = pipe._modify_text_encoder(pipe.text_encoder, pipe.lora_scale)
-
- set_lora_weights(params, randn_weight=False)
-
- # inference with lora
- outputs_with_lora = pipe.text_encoder(**dummy_tokens)[0]
- assert outputs_with_lora.shape == (1, 77, 32)
-
- assert torch.allclose(
- outputs_without_lora, outputs_with_lora
- ), "lora_up_weight are all zero, so the lora outputs should be the same to without lora outputs"
-
- # create lora_attn_procs with randn up.weights
- create_text_encoder_lora_attn_procs(pipe.text_encoder)
-
- # monkey patch
- params = pipe._modify_text_encoder(pipe.text_encoder, pipe.lora_scale)
-
- set_lora_weights(params, randn_weight=True)
-
- # inference with lora
- outputs_with_lora = pipe.text_encoder(**dummy_tokens)[0]
- assert outputs_with_lora.shape == (1, 77, 32)
-
- assert not torch.allclose(
- outputs_without_lora, outputs_with_lora
- ), "lora_up_weight are not zero, so the lora outputs should be different to without lora outputs"
-
- def test_text_encoder_lora_remove_monkey_patch(self):
- pipeline_components, _ = self.get_dummy_components()
- pipe = StableDiffusionPipeline(**pipeline_components)
-
- dummy_tokens = self.get_dummy_tokens()
-
- # inference without lora
- outputs_without_lora = pipe.text_encoder(**dummy_tokens)[0]
- assert outputs_without_lora.shape == (1, 77, 32)
-
- # monkey patch
- params = pipe._modify_text_encoder(pipe.text_encoder, pipe.lora_scale)
-
- set_lora_weights(params, randn_weight=True)
-
- # inference with lora
- outputs_with_lora = pipe.text_encoder(**dummy_tokens)[0]
- assert outputs_with_lora.shape == (1, 77, 32)
-
- assert not torch.allclose(
- outputs_without_lora, outputs_with_lora
- ), "lora outputs should be different to without lora outputs"
-
- # remove monkey patch
- pipe._remove_text_encoder_monkey_patch()
-
- # inference with removed lora
- outputs_without_lora_removed = pipe.text_encoder(**dummy_tokens)[0]
- assert outputs_without_lora_removed.shape == (1, 77, 32)
-
- assert torch.allclose(
- outputs_without_lora, outputs_without_lora_removed
- ), "remove lora monkey patch should restore the original outputs"
-
- def test_text_encoder_lora_scale(self):
- pipeline_components, lora_components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**pipeline_components)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- _, _, pipeline_inputs = self.get_dummy_inputs()
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- LoraLoaderMixin.save_lora_weights(
- save_directory=tmpdirname,
- unet_lora_layers=lora_components["unet_lora_layers"],
- text_encoder_lora_layers=lora_components["text_encoder_lora_layers"],
- )
- self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin")))
- sd_pipe.load_lora_weights(tmpdirname)
-
- lora_images = sd_pipe(**pipeline_inputs).images
- lora_image_slice = lora_images[0, -3:, -3:, -1]
-
- lora_images_with_scale = sd_pipe(**pipeline_inputs, cross_attention_kwargs={"scale": 0.5}).images
- lora_image_with_scale_slice = lora_images_with_scale[0, -3:, -3:, -1]
-
- # Outputs shouldn't match.
- self.assertFalse(
- torch.allclose(torch.from_numpy(lora_image_slice), torch.from_numpy(lora_image_with_scale_slice))
- )
-
- def test_lora_unet_attn_processors(self):
- with tempfile.TemporaryDirectory() as tmpdirname:
- self.create_lora_weight_file(tmpdirname)
-
- pipeline_components, _ = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**pipeline_components)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- # check if vanilla attention processors are used
- for _, module in sd_pipe.unet.named_modules():
- if isinstance(module, Attention):
- self.assertIsInstance(module.processor, (AttnProcessor, AttnProcessor2_0))
-
- # load LoRA weight file
- sd_pipe.load_lora_weights(tmpdirname)
-
- # check if lora attention processors are used
- for _, module in sd_pipe.unet.named_modules():
- if isinstance(module, Attention):
- attn_proc_class = (
- LoRAAttnProcessor2_0 if hasattr(F, "scaled_dot_product_attention") else LoRAAttnProcessor
- )
- self.assertIsInstance(module.processor, attn_proc_class)
-
- def test_unload_lora_sd(self):
- pipeline_components, lora_components = self.get_dummy_components()
- _, _, pipeline_inputs = self.get_dummy_inputs(with_generator=False)
- sd_pipe = StableDiffusionPipeline(**pipeline_components)
-
- original_images = sd_pipe(**pipeline_inputs, generator=torch.manual_seed(0)).images
- orig_image_slice = original_images[0, -3:, -3:, -1]
-
- # Emulate training.
- set_lora_weights(lora_components["unet_lora_layers"].parameters(), randn_weight=True)
- set_lora_weights(lora_components["text_encoder_lora_layers"].parameters(), randn_weight=True)
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- LoraLoaderMixin.save_lora_weights(
- save_directory=tmpdirname,
- unet_lora_layers=lora_components["unet_lora_layers"],
- text_encoder_lora_layers=lora_components["text_encoder_lora_layers"],
- )
- self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin")))
- sd_pipe.load_lora_weights(tmpdirname)
-
- lora_images = sd_pipe(**pipeline_inputs, generator=torch.manual_seed(0)).images
- lora_image_slice = lora_images[0, -3:, -3:, -1]
-
- # Unload LoRA parameters.
- sd_pipe.unload_lora_weights()
- original_images_two = sd_pipe(**pipeline_inputs, generator=torch.manual_seed(0)).images
- orig_image_slice_two = original_images_two[0, -3:, -3:, -1]
-
- assert not np.allclose(
- orig_image_slice, lora_image_slice
- ), "LoRA parameters should lead to a different image slice."
- assert not np.allclose(
- orig_image_slice_two, lora_image_slice
- ), "LoRA parameters should lead to a different image slice."
- assert np.allclose(
- orig_image_slice, orig_image_slice_two, atol=1e-3
- ), "Unloading LoRA parameters should lead to results similar to what was obtained with the pipeline without any LoRA parameters."
-
- @unittest.skipIf(torch_device != "cuda", "This test is supposed to run on GPU")
- def test_lora_unet_attn_processors_with_xformers(self):
- with tempfile.TemporaryDirectory() as tmpdirname:
- self.create_lora_weight_file(tmpdirname)
-
- pipeline_components, _ = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**pipeline_components)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- # enable XFormers
- sd_pipe.enable_xformers_memory_efficient_attention()
-
- # check if xFormers attention processors are used
- for _, module in sd_pipe.unet.named_modules():
- if isinstance(module, Attention):
- self.assertIsInstance(module.processor, XFormersAttnProcessor)
-
- # load LoRA weight file
- sd_pipe.load_lora_weights(tmpdirname)
-
- # check if lora attention processors are used
- for _, module in sd_pipe.unet.named_modules():
- if isinstance(module, Attention):
- self.assertIsInstance(module.processor, LoRAXFormersAttnProcessor)
-
- # unload lora weights
- sd_pipe.unload_lora_weights()
-
- # check if attention processors are reverted back to xFormers
- for _, module in sd_pipe.unet.named_modules():
- if isinstance(module, Attention):
- self.assertIsInstance(module.processor, XFormersAttnProcessor)
-
- @unittest.skipIf(torch_device != "cuda", "This test is supposed to run on GPU")
- def test_lora_save_load_with_xformers(self):
- pipeline_components, lora_components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**pipeline_components)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- _, _, pipeline_inputs = self.get_dummy_inputs()
-
- # enable XFormers
- sd_pipe.enable_xformers_memory_efficient_attention()
-
- original_images = sd_pipe(**pipeline_inputs).images
- orig_image_slice = original_images[0, -3:, -3:, -1]
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- LoraLoaderMixin.save_lora_weights(
- save_directory=tmpdirname,
- unet_lora_layers=lora_components["unet_lora_layers"],
- text_encoder_lora_layers=lora_components["text_encoder_lora_layers"],
- )
- self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin")))
- sd_pipe.load_lora_weights(tmpdirname)
-
- lora_images = sd_pipe(**pipeline_inputs).images
- lora_image_slice = lora_images[0, -3:, -3:, -1]
-
- # Outputs shouldn't match.
- self.assertFalse(torch.allclose(torch.from_numpy(orig_image_slice), torch.from_numpy(lora_image_slice)))
-
-
-class SDXLLoraLoaderMixinTests(unittest.TestCase):
- def get_dummy_components(self):
- torch.manual_seed(0)
- unet = UNet2DConditionModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=4,
- out_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
- # SD2-specific config below
- attention_head_dim=(2, 4),
- use_linear_projection=True,
- addition_embed_type="text_time",
- addition_time_embed_dim=8,
- transformer_layers_per_block=(1, 2),
- projection_class_embeddings_input_dim=80, # 6 * 8 + 32
- cross_attention_dim=64,
- )
- scheduler = EulerDiscreteScheduler(
- beta_start=0.00085,
- beta_end=0.012,
- steps_offset=1,
- beta_schedule="scaled_linear",
- timestep_spacing="leading",
- )
- torch.manual_seed(0)
- vae = AutoencoderKL(
- block_out_channels=[32, 64],
- in_channels=3,
- out_channels=3,
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
- latent_channels=4,
- sample_size=128,
- )
- torch.manual_seed(0)
- text_encoder_config = CLIPTextConfig(
- bos_token_id=0,
- eos_token_id=2,
- hidden_size=32,
- intermediate_size=37,
- layer_norm_eps=1e-05,
- num_attention_heads=4,
- num_hidden_layers=5,
- pad_token_id=1,
- vocab_size=1000,
- # SD2-specific config below
- hidden_act="gelu",
- projection_dim=32,
- )
- text_encoder = CLIPTextModel(text_encoder_config)
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- text_encoder_2 = CLIPTextModelWithProjection(text_encoder_config)
- tokenizer_2 = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- unet_lora_attn_procs, unet_lora_layers = create_unet_lora_layers(unet)
- text_encoder_one_lora_layers = create_text_encoder_lora_layers(text_encoder)
- text_encoder_two_lora_layers = create_text_encoder_lora_layers(text_encoder_2)
-
- pipeline_components = {
- "unet": unet,
- "scheduler": scheduler,
- "vae": vae,
- "text_encoder": text_encoder,
- "text_encoder_2": text_encoder_2,
- "tokenizer": tokenizer,
- "tokenizer_2": tokenizer_2,
- }
- lora_components = {
- "unet_lora_layers": unet_lora_layers,
- "text_encoder_one_lora_layers": text_encoder_one_lora_layers,
- "text_encoder_two_lora_layers": text_encoder_two_lora_layers,
- "unet_lora_attn_procs": unet_lora_attn_procs,
- }
- return pipeline_components, lora_components
-
- def get_dummy_inputs(self, with_generator=True):
- batch_size = 1
- sequence_length = 10
- num_channels = 4
- sizes = (32, 32)
-
- generator = torch.manual_seed(0)
- noise = floats_tensor((batch_size, num_channels) + sizes)
- input_ids = torch.randint(1, sequence_length, size=(batch_size, sequence_length), generator=generator)
-
- pipeline_inputs = {
- "prompt": "A painting of a squirrel eating a burger",
- "num_inference_steps": 2,
- "guidance_scale": 6.0,
- "output_type": "np",
- }
- if with_generator:
- pipeline_inputs.update({"generator": generator})
-
- return noise, input_ids, pipeline_inputs
-
- def test_lora_save_load(self):
- pipeline_components, lora_components = self.get_dummy_components()
- sd_pipe = StableDiffusionXLPipeline(**pipeline_components)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- _, _, pipeline_inputs = self.get_dummy_inputs()
-
- original_images = sd_pipe(**pipeline_inputs).images
- orig_image_slice = original_images[0, -3:, -3:, -1]
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- StableDiffusionXLPipeline.save_lora_weights(
- save_directory=tmpdirname,
- unet_lora_layers=lora_components["unet_lora_layers"],
- text_encoder_lora_layers=lora_components["text_encoder_one_lora_layers"],
- text_encoder_2_lora_layers=lora_components["text_encoder_two_lora_layers"],
- )
- self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin")))
- sd_pipe.load_lora_weights(tmpdirname)
-
- lora_images = sd_pipe(**pipeline_inputs).images
- lora_image_slice = lora_images[0, -3:, -3:, -1]
-
- # Outputs shouldn't match.
- self.assertFalse(torch.allclose(torch.from_numpy(orig_image_slice), torch.from_numpy(lora_image_slice)))
-
- def test_unload_lora_sdxl(self):
- pipeline_components, lora_components = self.get_dummy_components()
- _, _, pipeline_inputs = self.get_dummy_inputs(with_generator=False)
- sd_pipe = StableDiffusionXLPipeline(**pipeline_components)
-
- original_images = sd_pipe(**pipeline_inputs, generator=torch.manual_seed(0)).images
- orig_image_slice = original_images[0, -3:, -3:, -1]
-
- # Emulate training.
- set_lora_weights(lora_components["unet_lora_layers"].parameters(), randn_weight=True)
- set_lora_weights(lora_components["text_encoder_one_lora_layers"].parameters(), randn_weight=True)
- set_lora_weights(lora_components["text_encoder_two_lora_layers"].parameters(), randn_weight=True)
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- StableDiffusionXLPipeline.save_lora_weights(
- save_directory=tmpdirname,
- unet_lora_layers=lora_components["unet_lora_layers"],
- text_encoder_lora_layers=lora_components["text_encoder_one_lora_layers"],
- text_encoder_2_lora_layers=lora_components["text_encoder_two_lora_layers"],
- )
- self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin")))
- sd_pipe.load_lora_weights(tmpdirname)
-
- lora_images = sd_pipe(**pipeline_inputs, generator=torch.manual_seed(0)).images
- lora_image_slice = lora_images[0, -3:, -3:, -1]
-
- # Unload LoRA parameters.
- sd_pipe.unload_lora_weights()
- original_images_two = sd_pipe(**pipeline_inputs, generator=torch.manual_seed(0)).images
- orig_image_slice_two = original_images_two[0, -3:, -3:, -1]
-
- assert not np.allclose(
- orig_image_slice, lora_image_slice
- ), "LoRA parameters should lead to a different image slice."
- assert not np.allclose(
- orig_image_slice_two, lora_image_slice
- ), "LoRA parameters should lead to a different image slice."
- assert np.allclose(
- orig_image_slice, orig_image_slice_two, atol=1e-3
- ), "Unloading LoRA parameters should lead to results similar to what was obtained with the pipeline without any LoRA parameters."
-
-
-@slow
-@require_torch_gpu
-class LoraIntegrationTests(unittest.TestCase):
- def test_dreambooth_old_format(self):
- generator = torch.Generator("cpu").manual_seed(0)
-
- lora_model_id = "hf-internal-testing/lora_dreambooth_dog_example"
- card = RepoCard.load(lora_model_id)
- base_model_id = card.data.to_dict()["base_model"]
-
- pipe = StableDiffusionPipeline.from_pretrained(base_model_id, safety_checker=None)
- pipe = pipe.to(torch_device)
- pipe.load_lora_weights(lora_model_id)
-
- images = pipe(
- "A photo of a sks dog floating in the river", output_type="np", generator=generator, num_inference_steps=2
- ).images
-
- images = images[0, -3:, -3:, -1].flatten()
-
- expected = np.array([0.7207, 0.6787, 0.6010, 0.7478, 0.6838, 0.6064, 0.6984, 0.6443, 0.5785])
-
- self.assertTrue(np.allclose(images, expected, atol=1e-4))
-
- def test_dreambooth_text_encoder_new_format(self):
- generator = torch.Generator().manual_seed(0)
-
- lora_model_id = "hf-internal-testing/lora-trained"
- card = RepoCard.load(lora_model_id)
- base_model_id = card.data.to_dict()["base_model"]
-
- pipe = StableDiffusionPipeline.from_pretrained(base_model_id, safety_checker=None)
- pipe = pipe.to(torch_device)
- pipe.load_lora_weights(lora_model_id)
-
- images = pipe("A photo of a sks dog", output_type="np", generator=generator, num_inference_steps=2).images
-
- images = images[0, -3:, -3:, -1].flatten()
-
- expected = np.array([0.6628, 0.6138, 0.5390, 0.6625, 0.6130, 0.5463, 0.6166, 0.5788, 0.5359])
-
- self.assertTrue(np.allclose(images, expected, atol=1e-4))
-
- def test_a1111(self):
- generator = torch.Generator().manual_seed(0)
-
- pipe = StableDiffusionPipeline.from_pretrained("hf-internal-testing/Counterfeit-V2.5", safety_checker=None).to(
- torch_device
- )
- lora_model_id = "hf-internal-testing/civitai-light-shadow-lora"
- lora_filename = "light_and_shadow.safetensors"
- pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)
-
- images = pipe(
- "masterpiece, best quality, mountain", output_type="np", generator=generator, num_inference_steps=2
- ).images
-
- images = images[0, -3:, -3:, -1].flatten()
- expected = np.array([0.3725, 0.3767, 0.3761, 0.3796, 0.3827, 0.3763, 0.3831, 0.3809, 0.3392])
-
- self.assertTrue(np.allclose(images, expected, atol=1e-4))
-
- def test_vanilla_finetuning(self):
- generator = torch.Generator().manual_seed(0)
-
- lora_model_id = "hf-internal-testing/sd-model-finetuned-lora-t4"
- card = RepoCard.load(lora_model_id)
- base_model_id = card.data.to_dict()["base_model"]
-
- pipe = StableDiffusionPipeline.from_pretrained(base_model_id, safety_checker=None)
- pipe = pipe.to(torch_device)
- pipe.load_lora_weights(lora_model_id)
-
- images = pipe("A pokemon with blue eyes.", output_type="np", generator=generator, num_inference_steps=2).images
-
- images = images[0, -3:, -3:, -1].flatten()
-
- expected = np.array([0.7406, 0.699, 0.5963, 0.7493, 0.7045, 0.6096, 0.6886, 0.6388, 0.583])
-
- self.assertTrue(np.allclose(images, expected, atol=1e-4))
-
- def test_unload_lora(self):
- generator = torch.manual_seed(0)
- prompt = "masterpiece, best quality, mountain"
- num_inference_steps = 2
-
- pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", safety_checker=None).to(
- torch_device
- )
- initial_images = pipe(
- prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps
- ).images
- initial_images = initial_images[0, -3:, -3:, -1].flatten()
-
- lora_model_id = "hf-internal-testing/civitai-colored-icons-lora"
- lora_filename = "Colored_Icons_by_vizsumit.safetensors"
-
- pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)
- generator = torch.manual_seed(0)
- lora_images = pipe(
- prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps
- ).images
- lora_images = lora_images[0, -3:, -3:, -1].flatten()
-
- pipe.unload_lora_weights()
- generator = torch.manual_seed(0)
- unloaded_lora_images = pipe(
- prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps
- ).images
- unloaded_lora_images = unloaded_lora_images[0, -3:, -3:, -1].flatten()
-
- self.assertFalse(np.allclose(initial_images, lora_images))
- self.assertTrue(np.allclose(initial_images, unloaded_lora_images, atol=1e-3))
-
- def test_load_unload_load_kohya_lora(self):
- # This test ensures that a Kohya-style LoRA can be safely unloaded and then loaded
- # without introducing any side-effects. Even though the test uses a Kohya-style
- # LoRA, the underlying adapter handling mechanism is format-agnostic.
- generator = torch.manual_seed(0)
- prompt = "masterpiece, best quality, mountain"
- num_inference_steps = 2
-
- pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", safety_checker=None).to(
- torch_device
- )
- initial_images = pipe(
- prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps
- ).images
- initial_images = initial_images[0, -3:, -3:, -1].flatten()
-
- lora_model_id = "hf-internal-testing/civitai-colored-icons-lora"
- lora_filename = "Colored_Icons_by_vizsumit.safetensors"
-
- pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)
- generator = torch.manual_seed(0)
- lora_images = pipe(
- prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps
- ).images
- lora_images = lora_images[0, -3:, -3:, -1].flatten()
-
- pipe.unload_lora_weights()
- generator = torch.manual_seed(0)
- unloaded_lora_images = pipe(
- prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps
- ).images
- unloaded_lora_images = unloaded_lora_images[0, -3:, -3:, -1].flatten()
-
- self.assertFalse(np.allclose(initial_images, lora_images))
- self.assertTrue(np.allclose(initial_images, unloaded_lora_images, atol=1e-3))
-
- # make sure we can load a LoRA again after unloading and that it doesn't
- # have any undesired effects.
- pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)
- generator = torch.manual_seed(0)
- lora_images_again = pipe(
- prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps
- ).images
- lora_images_again = lora_images_again[0, -3:, -3:, -1].flatten()
-
- self.assertTrue(np.allclose(lora_images, lora_images_again, atol=1e-3))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gcnet/README.md b/spaces/Andy1621/uniformer_image_detection/configs/gcnet/README.md
deleted file mode 100644
index 0ef8db737743c63fbf2089e53d8f5302b52ee5e6..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/gcnet/README.md
+++ /dev/null
@@ -1,59 +0,0 @@
-# GCNet for Object Detection
-
-By [Yue Cao](http://yue-cao.me), [Jiarui Xu](http://jerryxu.net), [Stephen Lin](https://scholar.google.com/citations?user=c3PYmxUAAAAJ&hl=en), Fangyun Wei, [Han Hu](https://sites.google.com/site/hanhushomepage/).
-
-We provide config files to reproduce the results in the paper for
-["GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond"](https://arxiv.org/abs/1904.11492) on COCO object detection.
-
-## Introduction
-
-[ALGORITHM]
-
-**GCNet** is initially described in [arxiv](https://arxiv.org/abs/1904.11492). By absorbing the advantages of Non-Local Networks (NLNet) and Squeeze-Excitation Networks (SENet), GCNet provides a simple, fast, and effective approach to global context modeling that generally outperforms both NLNet and SENet on major benchmarks for various recognition tasks.
-
-## Citing GCNet
-
-```latex
-@article{cao2019GCNet,
- title={GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond},
- author={Cao, Yue and Xu, Jiarui and Lin, Stephen and Wei, Fangyun and Hu, Han},
- journal={arXiv preprint arXiv:1904.11492},
- year={2019}
-}
-```
-
-## Results and models
-
-The results on COCO 2017val are shown in the tables below.
-
-| Backbone | Model | Context | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
-| :-------: | :--------------: | :------------: | :-----: | :------: | :------------: | :----: | :-----: | :------: | :--------: |
-| R-50-FPN | Mask | GC(c3-c5, r16) | 1x | 5.0 | | 39.7 | 35.9 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r50_fpn_r16_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_r16_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_r16_gcb_c3-c5_1x_coco_20200515_211915-187da160.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_r16_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_r16_gcb_c3-c5_1x_coco_20200515_211915.log.json) |
-| R-50-FPN | Mask | GC(c3-c5, r4) | 1x | 5.1 | 15.0 | 39.9 | 36.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r50_fpn_r4_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_r4_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_r4_gcb_c3-c5_1x_coco_20200204-17235656.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_r4_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_r4_gcb_c3-c5_1x_coco_20200204_024626.log.json) |
-| R-101-FPN | Mask | GC(c3-c5, r16) | 1x | 7.6 | 11.4 | 41.3 | 37.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco_20200205-e58ae947.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco_20200205_192835.log.json) |
-| R-101-FPN | Mask | GC(c3-c5, r4) | 1x | 7.8 | 11.6 | 42.2 | 37.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r101_fpn_r4_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_r4_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_r4_gcb_c3-c5_1x_coco_20200206-af22dc9d.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_r4_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_r4_gcb_c3-c5_1x_coco_20200206_112128.log.json) |
-
-| Backbone | Model | Context | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
-| :-------: | :--------------: | :------------: | :-----: | :------: | :------------: | :----: | :-----: | :------: | :-------: |
-| R-50-FPN | Mask | - | 1x | 4.4 | 16.6 | 38.4 | 34.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco_20200202-bb3eb55c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco_20200202_214122.log.json) |
-| R-50-FPN | Mask | GC(c3-c5, r16) | 1x | 5.0 | 15.5 | 40.4 | 36.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200202-587b99aa.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200202_174907.log.json) |
-| R-50-FPN | Mask | GC(c3-c5, r4) | 1x | 5.1 | 15.1 | 40.7 | 36.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200202-50b90e5c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200202_085547.log.json) |
-| R-101-FPN | Mask | - | 1x | 6.4 | 13.3 | 40.5 | 36.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco_20200210-81658c8a.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco_20200210_220422.log.json) |
-| R-101-FPN | Mask | GC(c3-c5, r16) | 1x | 7.6 | 12.0 | 42.2 | 37.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200207-945e77ca.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200207_015330.log.json) |
-| R-101-FPN | Mask | GC(c3-c5, r4) | 1x | 7.8 | 11.8 | 42.2 | 37.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200206-8407a3f0.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200206_142508.log.json) |
-| X-101-FPN | Mask | - | 1x | 7.6 | 11.3 | 42.4 | 37.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco_20200211-7584841c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco_20200211_054326.log.json) |
-| X-101-FPN | Mask | GC(c3-c5, r16) | 1x | 8.8 | 9.8 | 43.5 | 38.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200211-cbed3d2c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200211_164715.log.json) |
-| X-101-FPN | Mask | GC(c3-c5, r4) | 1x | 9.0 | 9.7 | 43.9 | 39.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200212-68164964.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200212_070942.log.json) |
-| X-101-FPN | Cascade Mask | - | 1x | 9.2 | 8.4 | 44.7 | 38.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco_20200310-d5ad2a5e.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco_20200310_115217.log.json) |
-| X-101-FPN | Cascade Mask | GC(c3-c5, r16) | 1x | 10.3 | 7.7 | 46.2 | 39.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200211-10bf2463.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200211_184154.log.json) |
-| X-101-FPN | Cascade Mask | GC(c3-c5, r4) | 1x | 10.6 | | 46.4 | 40.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200703_180653-ed035291.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200703_180653.log.json) |
-| X-101-FPN | DCN Cascade Mask | - | 1x | | | 44.9 | 38.9 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco_20200516_182249-680fc3f2.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco_20200516_182249.log.json)|
-| X-101-FPN | DCN Cascade Mask | GC(c3-c5, r16) | 1x | | | 44.6 | |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r16_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r16_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r16_gcb_c3-c5_1x_coco_20200516_015634-08f56b56.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r16_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r16_gcb_c3-c5_1x_coco_20200516_015634.log.json) |
-| X-101-FPN | DCN Cascade Mask | GC(c3-c5, r4) | 1x | | | 45.7 | 39.5 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco_20200518_041145-24cabcfd.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco_20200518_041145.log.json) |
-
-**Notes:**
-
-- `SyncBN` is added in the backbone for all models in **Table 2**.
-- `GC` denotes that a Global Context (GC) block is inserted after the 1x1 conv of the backbone.
-- `DCN` denotes that the 3x3 conv is replaced with a 3x3 Deformable Convolution in the `c3-c5` stages of the backbone.
-- `r4` and `r16` denote ratios 4 and 16 in the GC block, respectively.
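As a rough illustration of what the `GC` block in these configs computes, here is a minimal NumPy sketch of the Global Context block's three stages (context modeling, bottleneck transform, fusion). The shapes are toy values and LayerNorm is omitted; this is an assumed simplification, not the mmdetection implementation:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def gc_block(x, wk, wv1, wv2):
    # x: (C, N) feature map flattened over the N = H*W positions.
    # 1) Context modeling: one attention-pooled global context vector.
    attn = softmax(wk @ x)          # (N,) attention over positions
    context = x @ attn              # (C,) pooled context
    # 2) Bottleneck transform with ratio r (LayerNorm omitted here).
    delta = wv2 @ np.maximum(wv1 @ context, 0.0)  # (C,)
    # 3) Fusion: broadcast-add the same context to every position.
    return x + delta[:, None]

C, N, r = 8, 16, 4
rng = np.random.default_rng(0)
x = rng.standard_normal((C, N))
out = gc_block(x,
               rng.standard_normal(C),
               rng.standard_normal((C // r, C)),
               rng.standard_normal((C, C // r)))
print(out.shape)  # (8, 16)
```

The per-position add of a single shared context vector is what makes the block much cheaper than full pairwise non-local attention while capturing the same global signal.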
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py
deleted file mode 100644
index 89caaafbc17d871d836e810ba7c038648937254c..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py
+++ /dev/null
@@ -1,15 +0,0 @@
-_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
-norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
-model = dict(
- pretrained='open-mmlab://contrib/resnet50_gn',
- backbone=dict(norm_cfg=norm_cfg),
- neck=dict(norm_cfg=norm_cfg),
- roi_head=dict(
- bbox_head=dict(
- type='Shared4Conv1FCBBoxHead',
- conv_out_channels=256,
- norm_cfg=norm_cfg),
- mask_head=dict(norm_cfg=norm_cfg)))
-# learning policy
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/swin/cascade_mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/swin/cascade_mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_1x_coco.py
deleted file mode 100644
index 36e3acd0a4b6ad08e5af3c7b9c639eff028431f7..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/swin/cascade_mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_1x_coco.py
+++ /dev/null
@@ -1,140 +0,0 @@
-_base_ = [
- '../_base_/models/cascade_mask_rcnn_swin_fpn.py',
- '../_base_/datasets/coco_instance.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-
-model = dict(
- backbone=dict(
- embed_dim=96,
- depths=[2, 2, 6, 2],
- num_heads=[3, 6, 12, 24],
- window_size=7,
- ape=False,
- drop_path_rate=0.0,
- patch_norm=True,
- use_checkpoint=False
- ),
- neck=dict(in_channels=[96, 192, 384, 768]),
- roi_head=dict(
- bbox_head=[
- dict(
- type='ConvFCBBoxHead',
- num_shared_convs=4,
- num_shared_fcs=1,
- in_channels=256,
- conv_out_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.1, 0.1, 0.2, 0.2]),
- reg_class_agnostic=False,
- reg_decoded_bbox=True,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0)),
- dict(
- type='ConvFCBBoxHead',
- num_shared_convs=4,
- num_shared_fcs=1,
- in_channels=256,
- conv_out_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.05, 0.05, 0.1, 0.1]),
- reg_class_agnostic=False,
- reg_decoded_bbox=True,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0)),
- dict(
- type='ConvFCBBoxHead',
- num_shared_convs=4,
- num_shared_fcs=1,
- in_channels=256,
- conv_out_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.033, 0.033, 0.067, 0.067]),
- reg_class_agnostic=False,
- reg_decoded_bbox=True,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0))
- ]))
-
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-
-# augmentation strategy originates from DETR / Sparse RCNN
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='AutoAugment',
- policies=[
- [
- dict(type='Resize',
- img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333),
- (608, 1333), (640, 1333), (672, 1333), (704, 1333),
- (736, 1333), (768, 1333), (800, 1333)],
- multiscale_mode='value',
- keep_ratio=True)
- ],
- [
- dict(type='Resize',
- img_scale=[(400, 1333), (500, 1333), (600, 1333)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomCrop',
- crop_type='absolute_range',
- crop_size=(384, 600),
- allow_negative_crop=True),
- dict(type='Resize',
- img_scale=[(480, 1333), (512, 1333), (544, 1333),
- (576, 1333), (608, 1333), (640, 1333),
- (672, 1333), (704, 1333), (736, 1333),
- (768, 1333), (800, 1333)],
- multiscale_mode='value',
- override=True,
- keep_ratio=True)
- ]
- ]),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-data = dict(train=dict(pipeline=train_pipeline))
-
-optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05,
- paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
- 'relative_position_bias_table': dict(decay_mult=0.),
- 'norm': dict(decay_mult=0.)}))
-lr_config = dict(step=[8, 11])
-runner = dict(type='EpochBasedRunnerAmp', max_epochs=12)
-
-# do not use mmdet version fp16
-fp16 = None
-optimizer_config = dict(
- type="DistOptimizerHook",
- update_interval=1,
- grad_clip=None,
- coalesce=True,
- bucket_size_mb=-1,
- use_fp16=True,
-)
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_20k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_20k_voc12aug.py
deleted file mode 100644
index 09a5fe5468f0155f8fd0bf2cd1574a33624d8492..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_20k_voc12aug.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './fcn_r50-d8_512x512_20k_voc12aug.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
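Two-line configs like the one above rely on mmcv-style `_base_` inheritance: the child file is merged recursively into its base, so only the overridden keys (here `pretrained` and `backbone.depth`) need restating. A simplified, assumed sketch of that merge (the real mmcv `Config` also handles special keys such as `_delete_`, used in the Swin config earlier in this diff):

```python
def merge_cfg(base, override):
    # Recursively merge: nested dicts merge key-by-key, other values replace.
    out = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge_cfg(out[key], val)
        else:
            out[key] = val
    return out

base = dict(model=dict(pretrained='open-mmlab://resnet50_v1c',
                       backbone=dict(depth=50, norm_eval=True)))
# The child config restates only what changes.
child = dict(model=dict(pretrained='open-mmlab://resnet101_v1c',
                        backbone=dict(depth=101)))
cfg = merge_cfg(base, child)
print(cfg['model']['backbone'])  # {'depth': 101, 'norm_eval': True}
```

Keys the child does not mention (like `norm_eval`) survive the merge, which is why a depth-101 variant can be expressed in two lines.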
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/RWKV.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/RWKV.py
deleted file mode 100644
index 39487c66b7dabec49a6aa80c4e499a088f1fa1a2..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/RWKV.py
+++ /dev/null
@@ -1,153 +0,0 @@
-'''
-This loader is not currently maintained as RWKV can now be loaded
-through the transformers library.
-'''
-
-import copy
-import os
-from pathlib import Path
-
-import numpy as np
-from tokenizers import Tokenizer
-
-import modules.shared as shared
-from modules.callbacks import Iteratorize
-
-np.set_printoptions(precision=4, suppress=True, linewidth=200)
-
-os.environ['RWKV_JIT_ON'] = '1'
-os.environ["RWKV_CUDA_ON"] = '1' if shared.args.rwkv_cuda_on else '0' # use CUDA kernel for seq mode (much faster)
-
-from rwkv.model import RWKV
-from rwkv.utils import PIPELINE, PIPELINE_ARGS
-
-
-class RWKVModel:
-    def __init__(self):
-        pass
-
-    @classmethod
-    def from_pretrained(cls, path, dtype="fp16", device="cuda"):
-        tokenizer_path = Path(f"{path.parent}/20B_tokenizer.json")
-        if shared.args.rwkv_strategy is None:
-            model = RWKV(model=str(path), strategy=f'{device} {dtype}')
-        else:
-            model = RWKV(model=str(path), strategy=shared.args.rwkv_strategy)
-
-        pipeline = PIPELINE(model, str(tokenizer_path))
-        result = cls()
-        result.pipeline = pipeline
-        result.model = model
-        result.cached_context = ""
-        result.cached_model_state = None
-        result.cached_output_logits = None
-        return result
-
-    def generate(self, prompt, state, callback=None):
-        args = PIPELINE_ARGS(
-            temperature=state['temperature'],
-            top_p=state['top_p'],
-            top_k=state['top_k'],
-            alpha_frequency=0.1,  # Frequency Penalty (as in GPT-3)
-            alpha_presence=0.1,  # Presence Penalty (as in GPT-3)
-            token_ban=[0],  # ban the generation of some tokens
-            token_stop=[]
-        )
-
-        if self.cached_context != "":
-            if prompt.startswith(self.cached_context):
-                prompt = prompt[len(self.cached_context):]
-            else:
-                self.cached_context = ""
-                self.cached_model_state = None
-                self.cached_output_logits = None
-
-        # out = self.pipeline.generate(prompt, token_count=state['max_new_tokens'], args=args, callback=callback)
-        out = self.generate_from_cached_state(prompt, token_count=state['max_new_tokens'], args=args, callback=callback)
-        return out
-
-    def generate_with_streaming(self, *args, **kwargs):
-        with Iteratorize(self.generate, args, kwargs, callback=None) as generator:
-            reply = ''
-            for token in generator:
-                reply += token
-                yield reply
-
-    # Similar to PIPELINE.generate, but lets us maintain the cached_model_state
-    def generate_from_cached_state(self, ctx="", token_count=20, args=None, callback=None):
-        all_tokens = []
-        out_str = ''
-        occurrence = {}
-        state = copy.deepcopy(self.cached_model_state) if self.cached_model_state is not None else None
-
-        # If we ended up with an empty context, just reuse the cached logits.
-        # This can happen if a user undoes a message and then sends the exact message again:
-        # the full context ends up being the same as the cached_context, so the remaining context is empty.
-        if ctx == "":
-            out = self.cached_output_logits
-
-        token = None
-        for i in range(token_count):
-            # forward
-            tokens = self.pipeline.encode(ctx) if i == 0 else [token]
-            while len(tokens) > 0:
-                out, state = self.model.forward(tokens[:args.chunk_len], state)
-                tokens = tokens[args.chunk_len:]
-
-            if i == 0:
-                begin_token = len(all_tokens)
-                last_token_posi = begin_token
-                # Cache the model state after scanning the context.
-                # We don't cache the state after processing our own generated tokens, because
-                # the output string might be post-processed arbitrarily; therefore, what's fed into the model
-                # on the next round of chat might be slightly different from what it output on the previous round.
-                self.cached_context += ctx
-                self.cached_model_state = copy.deepcopy(state)
-                self.cached_output_logits = copy.deepcopy(out)
-
-            # adjust probabilities
-            for n in args.token_ban:
-                out[n] = -float('inf')
-
-            for n in occurrence:
-                out[n] -= (args.alpha_presence + occurrence[n] * args.alpha_frequency)
-
-            # sampler
-            token = self.pipeline.sample_logits(out, temperature=args.temperature, top_p=args.top_p, top_k=args.top_k)
-            if token in args.token_stop:
-                break
-
-            all_tokens += [token]
-            if token not in occurrence:
-                occurrence[token] = 1
-            else:
-                occurrence[token] += 1
-
-            # output
-            tmp = self.pipeline.decode(all_tokens[last_token_posi:])
-            if '\ufffd' not in tmp:  # is it a valid utf-8 string?
-                if callback:
-                    callback(tmp)
-
-                out_str += tmp
-                last_token_posi = begin_token + i + 1
-
-        return out_str
-
-
-class RWKVTokenizer:
-    def __init__(self):
-        pass
-
-    @classmethod
-    def from_pretrained(cls, path):
-        tokenizer_path = path / "20B_tokenizer.json"
-        tokenizer = Tokenizer.from_file(str(tokenizer_path))
-        result = cls()
-        result.tokenizer = tokenizer
-        return result
-
-    def encode(self, prompt):
-        return self.tokenizer.encode(prompt).ids
-
-    def decode(self, ids):
-        return self.tokenizer.decode(ids)
diff --git a/spaces/AnkitGaur2811/Image_Conversion_app_using_Opencv/README.md b/spaces/AnkitGaur2811/Image_Conversion_app_using_Opencv/README.md
deleted file mode 100644
index e2ec15a3b9641f0408cdcd10cee07c353df84097..0000000000000000000000000000000000000000
--- a/spaces/AnkitGaur2811/Image_Conversion_app_using_Opencv/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Image Conversion App Using Opencv
-emoji: 😻
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.0.20
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/main.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/main.py
deleted file mode 100644
index 7f59cad7b70cece88aaa2687f8780cdf1d8c15e7..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/main.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from optimization.image_editor import ImageEditor
-from optimization.arguments import get_arguments
-
-
-if __name__ == "__main__":
-    args = get_arguments()
-    image_editor = ImageEditor(args)
-    image_editor.edit_image_by_prompt()
-    # image_editor.reconstruct_image()
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/gather_points.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/gather_points.py
deleted file mode 100644
index f52f1677d8ea0facafc56a3672d37adb44677ff3..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/gather_points.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import torch
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
-    '_ext', ['gather_points_forward', 'gather_points_backward'])
-
-
-class GatherPoints(Function):
-    """Gather points with given index."""
-
-    @staticmethod
-    def forward(ctx, features: torch.Tensor,
-                indices: torch.Tensor) -> torch.Tensor:
-        """
-        Args:
-            features (Tensor): (B, C, N) features to gather.
-            indices (Tensor): (B, M) where M is the number of points.
-
-        Returns:
-            Tensor: (B, C, M) where M is the number of points.
-        """
-        assert features.is_contiguous()
-        assert indices.is_contiguous()
-
-        B, npoint = indices.size()
-        _, C, N = features.size()
-        output = torch.cuda.FloatTensor(B, C, npoint)
-
-        ext_module.gather_points_forward(
-            features, indices, output, b=B, c=C, n=N, npoints=npoint)
-
-        ctx.for_backwards = (indices, C, N)
-        if torch.__version__ != 'parrots':
-            ctx.mark_non_differentiable(indices)
-        return output
-
-    @staticmethod
-    def backward(ctx, grad_out):
-        idx, C, N = ctx.for_backwards
-        B, npoint = idx.size()
-
-        grad_features = torch.cuda.FloatTensor(B, C, N).zero_()
-        grad_out_data = grad_out.data.contiguous()
-        ext_module.gather_points_backward(
-            grad_out_data,
-            idx,
-            grad_features.data,
-            b=B,
-            c=C,
-            n=N,
-            npoints=npoint)
-        return grad_features, None
-
-
-gather_points = GatherPoints.apply
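The compiled `gather_points_forward` kernel is opaque here, but its documented semantics are simply `out[b][c][m] = features[b][c][indices[b][m]]`. A pure-Python reference sketch of those semantics (nested lists in place of CUDA tensors; illustrative only, not the mmcv implementation):

```python
def gather_points_ref(features, indices):
    """Reference semantics of GatherPoints.forward.

    features: nested list of shape (B, C, N)
    indices:  nested list of shape (B, M)
    returns:  nested list of shape (B, C, M) where
              out[b][c][m] == features[b][c][indices[b][m]]
    """
    return [
        # For each batch element, index every channel by that batch's indices.
        [[channel[i] for i in idx_b] for channel in feat_b]
        for feat_b, idx_b in zip(features, indices)
    ]


# One batch, two channels, four points; pick points 2 and 0.
features = [[[10, 11, 12, 13],
             [20, 21, 22, 23]]]
indices = [[2, 0]]
print(gather_points_ref(features, indices))  # [[[12, 10], [22, 20]]]
```

The backward pass above is the transpose of this operation: each output gradient is scattered back (accumulated) into the position its index selected, which is why `grad_features` starts as zeros.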
diff --git a/spaces/AntX-ai/Fintech/index.html b/spaces/AntX-ai/Fintech/index.html
deleted file mode 100644
index ebacd19a35a6aa56f692fda5fd182bca221dd549..0000000000000000000000000000000000000000
--- a/spaces/AntX-ai/Fintech/index.html
+++ /dev/null
@@ -1,15 +0,0 @@
-
-
-
-
-
- My static Space
-
-
-
-
-
-Welcome to AntX.ai Fintech Space!
-
-You will explore various topics and algorithms about fintech
-How to Download and Install the Traffic Rider Hack APK on Android
-
-Traffic Rider is one of the most popular and addictive motorcycle racing games on Android. It offers a realistic, immersive experience with a first-person view, real bike sounds, detailed environments, and a full career mode. You can choose from more than 30 different bikes, customize them, and race through various settings while dodging traffic and obstacles.
-However, if you want to enjoy the game without limits or restrictions, you may want to try the Traffic Rider hack APK. This is a modified version of the original game that gives you unlimited money and gold, all bikes unlocked and upgraded, no ads, no timers, and more. With this hack, you can have more fun and challenge in Traffic Rider.
-
-In this article, we will show you how to download and install the Traffic Rider hack APK on your Android device. We will also discuss the benefits and risks of using this hack, as well as some tips and tricks for playing the game. Follow the steps below to get started.
-
-Steps to download and install the Traffic Rider hack APK on Android
-
-Before you can install the Traffic Rider hack APK on your device, you need to make sure you have enabled unknown sources in your settings. This lets you install apps from sources other than the Google Play Store. Here is how to do it:
-
-Step 1: Enable unknown sources on your device
-
-
-Go to your device settings and tap Apps & notifications (or Apps on older Android versions).
-
-Tap the three dots in the top-right corner.
-
-Tap Special access.
-
-Tap Install unknown apps.
-
-Tap Chrome (or whichever web browser you use).
-
-Toggle Allow from this source to the On position.
-
-
-
-Step 2: Download the Traffic Rider hack APK file from a reputable source
-
-
-Open your web browser and go to a website that offers the Traffic Rider hack APK file. For example, you can go to [APK Done]( 1 ) or [Traffic Rider Games]( 2 ).
-
-Find the download link for the Traffic Rider hack APK file and tap it.
-
-Accept any pop-ups or permissions that may appear.
-
-Wait for the download to finish.
-
-
-Once you have downloaded the APK file, you need to locate it on your device and tap it to install it. You can use any file explorer app you have on your phone, such as Cx File Explorer or File Manager. Here is how:
-
-Step 3: Locate and tap the APK file to install it
-
-
-Open your file explorer app and go to the Downloads folder on your device.
-
-Find the Traffic Rider hack APK file you downloaded and tap it.
-
-You may see a warning saying this type of file can harm your device. Tap OK or Install anyway (unsafe) to proceed.
-
-Tap Install and wait for the installation to complete.
-
-Tap Open to launch the game or Done to exit the installer.
-
-
-Congratulations, you have successfully installed the Traffic Rider hack APK on your device. You can now enjoy the game with all the hack features enabled. Here is how:
-
-
-Step 4: Launch the game and enjoy the hack features
-
-
-Open the game from the app drawer or home screen.
-
-You may see a pop-up asking you to allow access to your photos, media, and files. Tap Allow to continue.
-
-You may also see a pop-up asking you to rate the game. Tap Later or Rate now as you prefer.
-
-You will see the game's main menu with four options: Career, Endless, Time Trial, and Free Ride. Tap any of them to start playing.
-
-
-You will also notice that all bikes are unlocked and upgraded to the maximum level. You can pick any bike you like and customize it with different colors and wheels.
-
-You will see no ads or timers in the game. You can play as long as you want without interruptions or waiting times.
-
-
-Benefits of using the Traffic Rider hack APK
-
-Using the Traffic Rider hack APK has many benefits that can enhance your gaming experience. Here are some of them:
-
-Unlimited money and gold
-
-With unlimited money and gold, you can buy and upgrade any bike you want without worrying about the cost. You can also unlock all the achievements and rewards in the game with ease. You can have more fun and variety in Traffic Rider with unlimited money and gold.
-
-All bikes unlocked and upgraded
-
-With all bikes unlocked and upgraded, you can choose from more than 30 different bikes, each with its own characteristics and performance. You can also customize them with different colors and wheels to match your style. You can enjoy more realism and immersion in Traffic Rider with all bikes unlocked and upgraded.
-
-No ads or timers
-
-With no ads and no timers, you can play Traffic Rider without interruptions or waiting times. You can focus on the gameplay and graphics without being bothered by ads or timers. You can have more challenge and excitement in Traffic Rider with no ads and no timers.
-
-Risks of using the Traffic Rider hack APK
-
-However, using the Traffic Rider hack APK also carries some risks you should be aware of before installing it. Here are some of them:
-
-Potential malware or virus infection
-
-
-Possible ban or suspension from the game
-
-Since the Traffic Rider hack APK is a cheating tool that gives you an unfair advantage over other players, it may violate the terms of service of the game's developer or publisher. You could face a ban or suspension from the game if its anti-cheat system catches you using it or other players report you. You should use the Traffic Rider hack APK at your own risk and discretion.
-
-Legal or ethical issues
-
-Since the Traffic Rider hack APK is a pirated version of the original game, it may infringe the intellectual property rights of the game's developer or publisher. You could face legal or ethical issues if you use it without their permission or consent. You should respect the work and effort of the game's developer or publisher and support them by buying their official products.
-
-Conclusion
-
-Traffic Rider is a great motorcycle racing game that offers a realistic, immersive experience with a first-person view, real bike sounds, detailed environments, and a full career mode. However, if you want to enjoy the game without limits or restrictions, you may want to try the Traffic Rider hack APK.
-
-The Traffic Rider hack APK is a modified version of the original game that gives you unlimited money and gold, all bikes unlocked and upgraded, no ads, no timers, and more. With this hack, you can have more fun and challenge in Traffic Rider.
-
-In this article, we showed you how to download and install the Traffic Rider hack APK on your Android device. We also discussed the benefits and risks of using this hack, as well as some tips and tricks for playing the game. We hope you found this article helpful and informative. If you have any comments or questions, feel free to leave a comment below. Here are some frequently asked questions (FAQs) about the Traffic Rider hack APK:
-Q: Is the Traffic Rider hack APK safe to use?
-
-
-Q: How do I update the Traffic Rider hack APK?
-
-A: The Traffic Rider hack APK may not be compatible with the latest version of the original game. You may need to uninstall the hack and download a new version from the same source or a different one. You should also back up your game data before updating the hack.
-
-Q: Can I play the Traffic Rider hack APK online with other players?
-
-A: The Traffic Rider hack APK does not support online multiplayer mode. You can only play the game offline with the hack features enabled. You may also face a ban or suspension from the game if you try to play online with the hack.
-
-Q: Can I use the Traffic Rider hack APK on other devices or platforms?
-
-A: The Traffic Rider hack APK is designed only for Android devices. It cannot be used on other devices or platforms, such as iOS, Windows, or Mac. You may need to find a different hack or mod for those devices or platforms.
-
-Q: What are some alternatives to the Traffic Rider hack APK?
-
-A: If you do not want to use the Traffic Rider hack APK, you can try some alternatives that can enhance your gaming experience. For example, you can use Traffic Rider cheats, tips, guides, or tricks that can help you improve your skills and performance in the game. You can also use Traffic Rider mod APKs that may offer different features or modes than the original game.
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Apk Mvil Zingspeed.md b/spaces/Benson/text-generation/Examples/Descargar Apk Mvil Zingspeed.md
deleted file mode 100644
index b5f9c81750360ff78f62381320a68e4ef4b81005..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Apk Mvil Zingspeed.md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-
-How to Download the ZingSpeed Mobile APK for Android
-
-If you are looking for a fast-paced, thrilling, adrenaline-pumping racing game, you may want to check out ZingSpeed Mobile. This Vietnamese game is hugely popular locally and lets you customize everything you want on your car, and on top of that, it is also multiplayer! That way you can race against anyone and pull off amazing stunts on a variety of tracks.
-
-In this article, we will show you how to download the ZingSpeed Mobile APK for Android, how to install it, and how to play it. We will also share some tips and tricks for ZingSpeed Mobile that will help you improve your racing skills and performance. Let's get started!
-A brief introduction to the game and its features
-
-ZingSpeed Mobile is a 3D mobile racing game based on the original PC ZingSpeed that keeps the skill ceiling and depth of control while greatly improving character creation and fashion. It was developed by VNG Corporation, one of the leading game developers in Vietnam.
-
-ZingSpeed Mobile offers various game modes, such as Speed Race, Props Race, Ranked Race, Storyline, Couple Racing, Speed Racing Superpowers, Skateboarding Race, Border Races, Pig Wars, and more. You can also choose from different vehicle types, such as cars, motorcycles, skateboards, and so on. You can customize your vehicle with different parts, colors, stickers, decals, etc. You can also create your own character with different outfits, hairstyles, accessories, and more.
-
-ZingSpeed Mobile is also a multiplayer game that lets you compete against players from all over the world. You can join or create a team with friends or other racers who share the same passion for speed. You can also take part in tournaments of various sizes and compete against international opponents. You can chat with other players in-game or on social media platforms.
-
-
-Playing ZingSpeed Mobile can bring you many benefits, such as:
-
-
-It can improve your hand-eye coordination and reflexes, since you have to control your vehicle and avoid obstacles on the track.
-
-It can boost your creativity and imagination, since you can customize your vehicle and character to your preferences.
-
-It can build your confidence and self-esteem, since you can show off your racing skills and achievements to other players.
-
-It can reduce stress and boredom, since you can enjoy the thrill and excitement of racing.
-
-It can grow your social skills and network, since you can make friends with other players who share the same interest in racing.
-
-
-How to download the ZingSpeed Mobile APK from the Google Play Store
-
-Steps to download the APK file directly to your device
-
-If you want to download the ZingSpeed Mobile APK directly to your Android device, you need an Internet connection and a browser. Here are the steps:
-
-
-Open the Google Play Store on your Android device and search for ZingSpeed Mobile. Tap the app name to open its details page.
-
-Tap the three-dot menu in the top-right corner of the screen and select Share. A pop-up menu will appear with different options for sharing the app link.
-
-Select the option that lets you copy the link to the clipboard, such as Copy to clipboard, Copy link, etc.
-
-Open the browser on your device and paste the link into the address bar. Tap Go or Enter to load the page.
-
-On the page, you will see a button that says Download APK. Tap it and wait for the download to start.
-
-Once the download is complete, you will see a notification saying ZingSpeed Mobile APK downloaded. Tap it to open the file.
-
-
-Steps to download the APK file to your computer and transfer it to your device
-
-
-
-Open the Google Play Store on your computer and search for ZingSpeed Mobile. Click the app name to open its details page.
-
-Copy the page URL from your browser's address bar.
-
-Open a new tab in your browser and go to a website that lets you download APK files from the Google Play Store, such as APKPure, APKMirror, etc.
-
-Paste the ZingSpeed Mobile app page URL into the website's search box and click Search or Enter.
-
-You will see a list of results with different versions of the ZingSpeed Mobile APK. Choose the latest version and click Download or Download APK.
-
-Wait for the download to finish and locate the file on your computer.
-
-Connect your Android device to your computer using a USB cable or a wireless connection. Make sure your computer detects your device.
-
-Copy or move the ZingSpeed Mobile APK file from your computer to your device's storage. You can choose any folder you like, but make sure you remember its location.
-
-Disconnect your device from your computer and open the file manager app on your device. Go to the folder where you saved the ZingSpeed Mobile APK file and tap it to open it.
-
-
-How to install the ZingSpeed Mobile APK on Android
-
-Steps to enable unknown sources and install the APK file
-
-Before you can install the ZingSpeed Mobile APK on your Android device, you need to enable unknown sources, which lets you install apps from sources other than the Google Play Store. Here are the steps:
-
-
-Go to Settings on your device and tap Security or Privacy.
-
-Find the option that says Unknown sources or Install unknown apps and toggle it on. You may see a warning saying that installing apps from unknown sources can harm your device. Tap OK or Allow to proceed.
-
-
-Wait for the installation to complete and tap Open or Done.
-
-
-Steps to launch the game and enjoy it
-
-After you have installed the ZingSpeed Mobile APK on your Android device, you can launch the game and enjoy it. Here are the steps:
-
-
-Go to your app drawer or home screen and find the ZingSpeed Mobile icon. Tap it to launch the game.
-
-You may see a splash screen with the game logo and some loading animations. Wait a few seconds until the game fully loads.
-
-You may see a welcome screen with some options, such as Login, Register, Guest, etc. Choose whichever option suits you. If you have an existing account, you can log in with your username and password. If you do not have an account, you can register with your email or phone number. If you do not want to create an account, you can play as a guest without saving your progress.
-
-You may see a tutorial screen explaining how to play ZingSpeed Mobile. You can follow the instructions or skip them if you already know how to play.
-
-You will see a main menu screen with different options, such as Game Mode, Garage, Shop, Team, Chat, Settings, etc. You can explore these options or start playing right away by tapping Game Mode.
-
-You will see a list of game modes to choose from, such as Speed Race, Props Race, Ranked Race, Storyline, Couple Racing, Speed Racing Superpowers, Skateboarding Race, Border Races, Pig Wars, and more. You can also filter the game modes by difficulty, region, mode, etc. Tap the game mode you want to play and wait for the match to start.
-
-You will see a lobby screen where you can view your opponents, your vehicle, and some settings. You can also chat with other players, invite friends, switch vehicles, etc. Tap Ready or Start when you are ready to race.
-
-
-You will see a racing screen where you control your vehicle and compete against other players. You can use the on-screen buttons to accelerate, brake, drift, use items, etc. You can also tilt the device to steer your vehicle. You can see your position, speed, time, lap, etc. on the screen. Try to reach the finish line as fast as possible and win the race.
-
-You will see a results screen where you can view your rank, score, rewards, etc. You can also rate the match, chat with other players, replay the race, etc. Tap Next or Back to keep playing or return to the main menu.
-
-
-Tips and tricks for ZingSpeed Mobile
-
-Some useful tips and tricks to improve your racing skills and performance
-
-ZingSpeed Mobile is not only about speed but also about skill and strategy. Here are some useful tips and tricks that can help you improve your racing skills and performance:
-
-
-Practice makes perfect. The more you play ZingSpeed Mobile, the more familiar you will become with the tracks, vehicles, items, etc. You can also practice in different game modes and difficulty levels to challenge yourself and learn new skills.
-
-Customize your vehicle and character. ZingSpeed Mobile lets you customize everything you want on your vehicle and character. You can change your vehicle's parts, colors, stickers, decals, etc. to improve its performance and looks. You can also change your character's outfits, hairstyles, accessories, etc. to express your personality and style.
-
-
-Drift like a pro. Drifting is one of the most important skills in ZingSpeed Mobile. Drifting lets you take corners smoothly and quickly without losing speed or control. To drift, tap and hold the brake button while steering your vehicle. The longer you hold the brake button, the more angle and smoke you will create. Drifting also fills your nitro gauge, which you can use to boost your speed by tapping the nitro button.
-
-Perform stunts and tricks. ZingSpeed Mobile has various tracks with ramps, loops, jumps, etc. These tracks let you perform stunts and tricks that can make your race more exciting and fun. To perform stunts and tricks, press the stunt button while in the air or on a ramp. You can also tilt the device to adjust your direction and balance. Performing stunts and tricks also fills your nitro gauge and earns you extra points.
-
-
-Some common problems and solutions for ZingSpeed Mobile
-
-ZingSpeed Mobile is a great racing game that can provide hours of entertainment and enjoyment. However, like any other game, it may have some problems that can affect your gaming experience. Here are some common problems and solutions for ZingSpeed Mobile:
-
-
-Problem
-Solution
-
-The game crashes or freezes.
-This can be caused by low memory or storage space on your device. You can try to free up space by deleting unwanted apps or files. You can also try restarting your device or reinstalling the game.
-
-The game lags or runs slowly.
-This can be caused by a poor Internet connection or heavy traffic on the server. You can try switching to a different network or playing at a different time when fewer players are online. You can also try lowering the graphics quality or closing other apps running in the background.
-
-
-The game does not connect or sync.
-This can be caused by a firewall or antivirus software blocking the game from accessing the Internet or the server. You can try disabling it or allowing the game through your firewall or antivirus settings. You can also try logging out of your account and back in, or using a different account.
-
-The game does not recognize my account or purchases.
-This can be caused by a bug in the game or on the server. You can try contacting the game's customer support or the Google Play Store and providing them with your account details and proof of purchase. You can also try restoring your purchases from the game's settings menu.
-
-
-Conclusion
-
-A summary of the main points and a call to action
-
-ZingSpeed Mobile is a 3D mobile racing game that offers various game modes, vehicles, customization options, and multiplayer features. It is a fun, exciting game that can improve your racing skills, creativity, confidence, social skills, and more. In this article, we showed you how to download the ZingSpeed Mobile APK for Android, how to install it, and how to play it. We also shared some tips and tricks for ZingSpeed Mobile that can help you improve your performance and solve some common problems.
-
-If you are ready to experience the thrill and adrenaline of ZingSpeed Mobile, download it now and join the race! You can also share this article with friends who love racing games and invite them to play with you. Have fun and good luck!
-
-FAQs
-
-Five unique frequently asked questions about ZingSpeed Mobile and their answers
-
-Here are some frequently asked questions about ZingSpeed Mobile and their answers:
-
-
-
-Q: How do I change my vehicle or character in ZingSpeed Mobile? A: You can change your vehicle or character by going to the Garage or the Shop. You can choose from different vehicle types, such as cars, motorcycles, skateboards, etc. You can also choose from different outfits, hairstyles, accessories, etc. for your character.
-
-Q: How do I join or create a team in ZingSpeed Mobile? A: You can join or create a team by going to the Team menu. You can search for an existing team or create your own with a name, logo, description, etc. You can also invite friends or other players to join your team.
-
-Q: How do I chat with other players in ZingSpeed Mobile? A: You can chat with other players by going to the Chat menu. You can choose from different chat channels, such as Global, Team, Friend, etc. You can also send private messages to other players by tapping their names.
-
-Q: How do I report a bug or an issue in ZingSpeed Mobile? A: You can report a bug or an issue by going to the Settings menu and tapping Feedback. You can fill in a form with your details and describe your problem. You can also attach screenshots or videos if possible. You can also contact the game's customer support by email or on social media platforms.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar El Formulario 29 30.md b/spaces/Benson/text-generation/Examples/Descargar El Formulario 29 30.md
deleted file mode 100644
index 5872eed6f8a5c177a71910d6889689d559ed2cb9..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar El Formulario 29 30.md
+++ /dev/null
@@ -1,51 +0,0 @@
-
-
How to Download Forms 29 and 30
-
Forms 29 and 30 are the documents required to transfer ownership of a motor vehicle in India. Form 29 is the notice of transfer of ownership of a motor vehicle, while Form 30 is the report of transfer of ownership. You must fill in and submit both forms to the Regional Transport Office (RTO) whenever you sell or buy a used vehicle.
-
Now that you know what Forms 29 and 30 are, let's see how to download them from the official Parivahan Sewa website, the online portal for road transport services in India. Follow these steps:
-
How to Download Forms 29 and 30 from the Parivahan Sewa Website
Click the "Online Services" tab in the top menu bar and select "Vehicle Related Services".
-
Enter your vehicle number and click "Proceed".
-
On the next page, click "Download Forms" under the "Miscellaneous" section.
-
You will see a list of all the forms available for download. Scroll down to find Form 29 and Form 30, then click the "Download" button next to each form to save it to your computer.
-
Print the forms and fill them in with the required details, such as the vehicle number, engine number, chassis number, seller's name and address, buyer's name and address, and date of transfer.
-
-
How to Fill In and Submit Forms 29 and 30
-
-
-
Sign the forms and attach the necessary documents, such as proof of identity, proof of address, the insurance certificate, and the pollution-under-control certificate.
-
Submit the forms and documents to your nearest RTO within 14 days of the transfer of ownership. You may also have to pay a nominal fee for processing the forms.
-
You can also apply online for a new registration certificate (RC) for your vehicle through the Parivahan Sewa website. You will need to upload scanned copies of your documents and pay the fee online. You will receive a receipt and a reference number that you can use to track the status of your application.
-
You can also check the status of your ownership transfer online by entering your vehicle number and application number on the Parivahan Sewa website.
-
-
Conclusion
-
In this article, we have seen how to download Forms 29 and 30 from the Parivahan Sewa website and how to fill them in and submit them to the RTO. We have also covered how to apply for a new RC online and how to check the status of an ownership transfer. By following these steps, you can easily transfer ownership of your vehicle in India without any hassle.
-
I hope you found this article useful and informative. If you have any questions or comments, feel free to leave them below. Thanks for reading!
-
Frequently Asked Questions
-
-
What are Forms 29 and 30?
-
Forms 29 and 30 are the documents required to transfer ownership of a motor vehicle in India. Form 29 is the notice of transfer of ownership of a motor vehicle, while Form 30 is the report of transfer of ownership.
-
-
Where can I download Forms 29 and 30?
-
-
Which documents do I need to submit with Forms 29 and 30?
-
You need to submit the following documents along with Forms 29 and 30:
-
-
Proof of identity (such as an Aadhaar card, PAN card, or driving licence)
-
Proof of address (such as an electricity bill, water bill, or rental agreement)
-
Insurance certificate
-
Pollution-under-control certificate
-
No-objection certificate (NOC) from the previous owner (if applicable)
-
No-objection certificate (NOC) from the financier (if applicable)
-
Affidavit stating that the vehicle is free of any legal encumbrance (if applicable)
-
-
How long does it take to transfer ownership of a vehicle?
-
The time needed to transfer ownership of a vehicle depends on several factors, such as the location of the RTO, the type of vehicle, and the mode of payment. In general, the RTO takes 15 to 30 days to process your application and issue a new RC. You can check the status of your application online through the Parivahan Sewa website.
-
How much do I pay to transfer ownership of a vehicle?
-
The fee for transferring ownership of a vehicle varies depending on the location of the RTO and the type and age of the vehicle. You can check the exact fee for your vehicle on the Parivahan Sewa website or contact your nearest RTO for details. In general, it ranges from Rs. 200 to Rs. 500 for two-wheelers and from Rs. 300 to Rs. 1,000 for four-wheelers.
-
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/certifi/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/certifi/__init__.py
deleted file mode 100644
index a3546f12555c2c8d186489c5220e8d2e25f0b0a9..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/certifi/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .core import contents, where
-
-__all__ = ["contents", "where"]
-__version__ = "2022.12.07"
diff --git a/spaces/CVH-vn1210/make_hair/minigpt4/models/eva_vit.py b/spaces/CVH-vn1210/make_hair/minigpt4/models/eva_vit.py
deleted file mode 100644
index 7fcc63a74049f1faf65c99943ef94f72383ca3f5..0000000000000000000000000000000000000000
--- a/spaces/CVH-vn1210/make_hair/minigpt4/models/eva_vit.py
+++ /dev/null
@@ -1,442 +0,0 @@
-# Based on EVA, BEIT, timm and DeiT code bases
-# https://github.com/baaivision/EVA
-# https://github.com/rwightman/pytorch-image-models/tree/master/timm
-# https://github.com/microsoft/unilm/tree/master/beit
-# https://github.com/facebookresearch/deit/
-# https://github.com/facebookresearch/dino
-# --------------------------------------------------------'
-import math
-from functools import partial
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-from timm.models.layers import drop_path, to_2tuple, trunc_normal_
-from timm.models.registry import register_model
-
-from minigpt4.common.dist_utils import download_cached_file
-
-def _cfg(url='', **kwargs):
- return {
- 'url': url,
- 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None,
- 'crop_pct': .9, 'interpolation': 'bicubic',
- 'mean': (0.5, 0.5, 0.5), 'std': (0.5, 0.5, 0.5),
- **kwargs
- }
-
-
-class DropPath(nn.Module):
- """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
- """
- def __init__(self, drop_prob=None):
- super(DropPath, self).__init__()
- self.drop_prob = drop_prob
-
- def forward(self, x):
- return drop_path(x, self.drop_prob, self.training)
-
- def extra_repr(self) -> str:
- return 'p={}'.format(self.drop_prob)
-
-
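The DropPath module above implements stochastic depth: during training, each sample's residual branch is dropped with probability `drop_prob`, and surviving branches are rescaled by `1 / keep_prob` so the expected value is unchanged. A minimal framework-free sketch of that behavior (plain lists instead of tensors; this mirrors the idea behind the timm helper, not its actual implementation):

```python
import random

def drop_path(x, drop_prob=0.0, training=True):
    """Stochastic depth over a batch of per-sample branch outputs (floats).

    Each sample is either zeroed (dropped) or scaled by 1 / keep_prob so
    that the expected value matches the no-dropout case.
    """
    if drop_prob == 0.0 or not training:
        return list(x)
    keep_prob = 1.0 - drop_prob
    out = []
    for v in x:
        keep = random.random() < keep_prob  # keep this sample's branch?
        out.append(v / keep_prob if keep else 0.0)
    return out
```

At inference (`training=False`) the branch always passes through unchanged, which is why `DropPath.forward` delegates to a helper gated on `self.training`.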
-class Mlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- # x = self.drop(x)
- # dropout after fc1 is disabled to match the original BERT implementation
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-class Attention(nn.Module):
- def __init__(
- self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0.,
- proj_drop=0., window_size=None, attn_head_dim=None):
- super().__init__()
- self.num_heads = num_heads
- head_dim = dim // num_heads
- if attn_head_dim is not None:
- head_dim = attn_head_dim
- all_head_dim = head_dim * self.num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- self.qkv = nn.Linear(dim, all_head_dim * 3, bias=False)
- if qkv_bias:
- self.q_bias = nn.Parameter(torch.zeros(all_head_dim))
- self.v_bias = nn.Parameter(torch.zeros(all_head_dim))
- else:
- self.q_bias = None
- self.v_bias = None
-
- if window_size:
- self.window_size = window_size
- self.num_relative_distance = (2 * window_size[0] - 1) * (2 * window_size[1] - 1) + 3
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros(self.num_relative_distance, num_heads)) # 2*Wh-1 * 2*Ww-1, nH
- # cls-to-token, token-to-cls, and cls-to-cls buckets
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(window_size[0])
- coords_w = torch.arange(window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * window_size[1] - 1
- relative_position_index = \
- torch.zeros(size=(window_size[0] * window_size[1] + 1, ) * 2, dtype=relative_coords.dtype)
- relative_position_index[1:, 1:] = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- relative_position_index[0, 0:] = self.num_relative_distance - 3
- relative_position_index[0:, 0] = self.num_relative_distance - 2
- relative_position_index[0, 0] = self.num_relative_distance - 1
-
- self.register_buffer("relative_position_index", relative_position_index)
- else:
- self.window_size = None
- self.relative_position_bias_table = None
- self.relative_position_index = None
-
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(all_head_dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- def forward(self, x, rel_pos_bias=None):
- B, N, C = x.shape
- qkv_bias = None
- if self.q_bias is not None:
- qkv_bias = torch.cat((self.q_bias, torch.zeros_like(self.v_bias, requires_grad=False), self.v_bias))
- # qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias)
- qkv = qkv.reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- if self.relative_position_bias_table is not None:
- relative_position_bias = \
- self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1] + 1,
- self.window_size[0] * self.window_size[1] + 1, -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if rel_pos_bias is not None:
- attn = attn + rel_pos_bias
-
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B, N, -1)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
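One detail worth noting in `Attention.forward` above: the fused qkv projection carries a learnable bias for q and v but a fixed zero bias for k, assembled with `torch.cat((q_bias, zeros_like(v_bias), v_bias))`. A toy sketch of that assembly on plain lists (hypothetical helper name, for illustration only):

```python
def fused_qkv_bias(q_bias, v_bias):
    """Assemble the bias for a fused qkv linear layer: learnable q and v
    biases with a zero k bias sandwiched between them, mirroring the
    torch.cat in Attention.forward."""
    return list(q_bias) + [0.0] * len(q_bias) + list(v_bias)
```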
-class Block(nn.Module):
-
- def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., init_values=None, act_layer=nn.GELU, norm_layer=nn.LayerNorm,
- window_size=None, attn_head_dim=None):
- super().__init__()
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
- attn_drop=attn_drop, proj_drop=drop, window_size=window_size, attn_head_dim=attn_head_dim)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- if init_values is not None and init_values > 0:
- self.gamma_1 = nn.Parameter(init_values * torch.ones(dim), requires_grad=True)
- self.gamma_2 = nn.Parameter(init_values * torch.ones(dim), requires_grad=True)
- else:
- self.gamma_1, self.gamma_2 = None, None
-
- def forward(self, x, rel_pos_bias=None):
- if self.gamma_1 is None:
- x = x + self.drop_path(self.attn(self.norm1(x), rel_pos_bias=rel_pos_bias))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- else:
- x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x), rel_pos_bias=rel_pos_bias))
- x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x)))
- return x
-
-
-class PatchEmbed(nn.Module):
- """ Image to Patch Embedding
- """
- def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
- self.patch_shape = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
- self.img_size = img_size
- self.patch_size = patch_size
- self.num_patches = num_patches
-
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
-
- def forward(self, x, **kwargs):
- B, C, H, W = x.shape
- # FIXME look at relaxing size constraints
- assert H == self.img_size[0] and W == self.img_size[1], \
- f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
- x = self.proj(x).flatten(2).transpose(1, 2)
- return x
-
-
-class RelativePositionBias(nn.Module):
-
- def __init__(self, window_size, num_heads):
- super().__init__()
- self.window_size = window_size
- self.num_relative_distance = (2 * window_size[0] - 1) * (2 * window_size[1] - 1) + 3
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros(self.num_relative_distance, num_heads)) # 2*Wh-1 * 2*Ww-1, nH
- # cls-to-token, token-to-cls, and cls-to-cls buckets
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(window_size[0])
- coords_w = torch.arange(window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * window_size[1] - 1
- relative_position_index = \
- torch.zeros(size=(window_size[0] * window_size[1] + 1,) * 2, dtype=relative_coords.dtype)
- relative_position_index[1:, 1:] = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- relative_position_index[0, 0:] = self.num_relative_distance - 3
- relative_position_index[0:, 0] = self.num_relative_distance - 2
- relative_position_index[0, 0] = self.num_relative_distance - 1
-
- self.register_buffer("relative_position_index", relative_position_index)
-
- # trunc_normal_(self.relative_position_bias_table, std=.02)
-
- def forward(self):
- relative_position_bias = \
- self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1] + 1,
- self.window_size[0] * self.window_size[1] + 1, -1) # Wh*Ww,Wh*Ww,nH
- return relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
-
-
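The index buffer built in `RelativePositionBias.__init__` maps every ordered pair of window positions to a bucket in the `(2*Wh-1) * (2*Ww-1)` bias table. Setting aside the three extra cls-token buckets, the arithmetic reduces to the following pure-Python sketch (hypothetical helper, lists instead of tensors):

```python
def relative_position_index(wh, ww):
    """Bucket index for every ordered pair of tokens in a wh x ww window.

    Mirrors the buffer construction above: the (dh, dw) offset is shifted
    to start at 0, then flattened as dh * (2*ww - 1) + dw.
    """
    coords = [(h, w) for h in range(wh) for w in range(ww)]
    n = len(coords)
    idx = [[0] * n for _ in range(n)]
    for i, (h1, w1) in enumerate(coords):
        for j, (h2, w2) in enumerate(coords):
            dh = h1 - h2 + wh - 1          # shift row offset to [0, 2*wh-2]
            dw = w1 - w2 + ww - 1          # shift col offset to [0, 2*ww-2]
            idx[i][j] = dh * (2 * ww - 1) + dw
    return idx
```

For a 2x2 window this yields indices in [0, 8], i.e. the `(2*2-1)**2 = 9` relative offsets, with the zero-offset bucket (index 4) on the diagonal.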
-class VisionTransformer(nn.Module):
- """ Vision Transformer with support for patch or hybrid CNN input stage
- """
- def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12,
- num_heads=12, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop_rate=0., attn_drop_rate=0.,
- drop_path_rate=0., norm_layer=nn.LayerNorm, init_values=None,
- use_abs_pos_emb=True, use_rel_pos_bias=False, use_shared_rel_pos_bias=False,
- use_mean_pooling=True, init_scale=0.001, use_checkpoint=False):
- super().__init__()
- self.image_size = img_size
- self.num_classes = num_classes
- self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
-
- self.patch_embed = PatchEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
- num_patches = self.patch_embed.num_patches
-
- self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
- if use_abs_pos_emb:
- self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
- else:
- self.pos_embed = None
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- if use_shared_rel_pos_bias:
- self.rel_pos_bias = RelativePositionBias(window_size=self.patch_embed.patch_shape, num_heads=num_heads)
- else:
- self.rel_pos_bias = None
- self.use_checkpoint = use_checkpoint
-
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
- self.use_rel_pos_bias = use_rel_pos_bias
- self.blocks = nn.ModuleList([
- Block(
- dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer,
- init_values=init_values, window_size=self.patch_embed.patch_shape if use_rel_pos_bias else None)
- for i in range(depth)])
-# self.norm = nn.Identity() if use_mean_pooling else norm_layer(embed_dim)
-# self.fc_norm = norm_layer(embed_dim) if use_mean_pooling else None
-# self.head = nn.Linear(embed_dim, num_classes) if num_classes > 0 else nn.Identity()
-
- if self.pos_embed is not None:
- trunc_normal_(self.pos_embed, std=.02)
- trunc_normal_(self.cls_token, std=.02)
- # trunc_normal_(self.mask_token, std=.02)
-# if isinstance(self.head, nn.Linear):
-# trunc_normal_(self.head.weight, std=.02)
- self.apply(self._init_weights)
- self.fix_init_weight()
-# if isinstance(self.head, nn.Linear):
-# self.head.weight.data.mul_(init_scale)
-# self.head.bias.data.mul_(init_scale)
-
- def fix_init_weight(self):
- def rescale(param, layer_id):
- param.div_(math.sqrt(2.0 * layer_id))
-
- for layer_id, layer in enumerate(self.blocks):
- rescale(layer.attn.proj.weight.data, layer_id + 1)
- rescale(layer.mlp.fc2.weight.data, layer_id + 1)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- def get_classifier(self):
- return self.head
-
- def reset_classifier(self, num_classes, global_pool=''):
- self.num_classes = num_classes
- self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()
-
- def forward_features(self, x):
- x = self.patch_embed(x)
- batch_size, seq_len, _ = x.size()
-
- cls_tokens = self.cls_token.expand(batch_size, -1, -1) # stole cls_tokens impl from Phil Wang, thanks
- x = torch.cat((cls_tokens, x), dim=1)
- if self.pos_embed is not None:
- x = x + self.pos_embed
- x = self.pos_drop(x)
-
- rel_pos_bias = self.rel_pos_bias() if self.rel_pos_bias is not None else None
- for blk in self.blocks:
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x, rel_pos_bias)
- else:
- x = blk(x, rel_pos_bias)
- return x
-# x = self.norm(x)
-
-# if self.fc_norm is not None:
-# t = x[:, 1:, :]
-# return self.fc_norm(t.mean(1))
-# else:
-# return x[:, 0]
-
- def forward(self, x):
- x = self.forward_features(x)
-# x = self.head(x)
- return x
-
- def get_intermediate_layers(self, x):
- x = self.patch_embed(x)
- batch_size, seq_len, _ = x.size()
-
- cls_tokens = self.cls_token.expand(batch_size, -1, -1) # stole cls_tokens impl from Phil Wang, thanks
- x = torch.cat((cls_tokens, x), dim=1)
- if self.pos_embed is not None:
- x = x + self.pos_embed
- x = self.pos_drop(x)
-
- features = []
- rel_pos_bias = self.rel_pos_bias() if self.rel_pos_bias is not None else None
- for blk in self.blocks:
- x = blk(x, rel_pos_bias)
- features.append(x)
-
- return features
-
-
-def interpolate_pos_embed(model, checkpoint_model):
- if 'pos_embed' in checkpoint_model:
- pos_embed_checkpoint = checkpoint_model['pos_embed'].float()
- embedding_size = pos_embed_checkpoint.shape[-1]
- num_patches = model.patch_embed.num_patches
- num_extra_tokens = model.pos_embed.shape[-2] - num_patches
- # height (== width) for the checkpoint position embedding
- orig_size = int((pos_embed_checkpoint.shape[-2] - num_extra_tokens) ** 0.5)
- # height (== width) for the new position embedding
- new_size = int(num_patches ** 0.5)
- # class_token and dist_token are kept unchanged
- if orig_size != new_size:
- print("Position interpolate from %dx%d to %dx%d" % (orig_size, orig_size, new_size, new_size))
- extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens]
- # only the position tokens are interpolated
- pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:]
- pos_tokens = pos_tokens.reshape(-1, orig_size, orig_size, embedding_size).permute(0, 3, 1, 2)
- pos_tokens = torch.nn.functional.interpolate(
- pos_tokens, size=(new_size, new_size), mode='bicubic', align_corners=False)
- pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2)
- new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1)
- checkpoint_model['pos_embed'] = new_pos_embed
-
-
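`interpolate_pos_embed` above splits the checkpoint's position embedding into leading "extra" tokens (kept verbatim) and a square patch grid that is bicubically resized. The bookkeeping that determines the split and the two grid sizes can be sketched without torch (hypothetical helper name, illustration only):

```python
import math

def pos_embed_resize_plan(ckpt_len, model_len, num_patches):
    """Return (num_extra_tokens, orig_size, new_size) as computed in
    interpolate_pos_embed: extra tokens are whatever precedes the patch
    grid, and both grids are assumed square."""
    num_extra = model_len - num_patches
    orig_size = math.isqrt(ckpt_len - num_extra)   # checkpoint grid side
    new_size = math.isqrt(num_patches)             # target grid side
    return num_extra, orig_size, new_size
```

For example, loading a 14x14-grid checkpoint (196 patches + 1 cls token) into the 16x16-grid model gives a plan of one kept extra token and a 14 -> 16 grid interpolation.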
-def convert_weights_to_fp16(model: nn.Module):
- """Convert applicable model parameters to fp16"""
-
- def _convert_weights_to_fp16(l):
- if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)):
- l.weight.data = l.weight.data.half()
- if l.bias is not None:
- l.bias.data = l.bias.data.half()
-
-# if isinstance(l, (nn.MultiheadAttention, Attention)):
-# for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]:
-# tensor = getattr(l, attr)
-# if tensor is not None:
-# tensor.data = tensor.data.half()
-
- model.apply(_convert_weights_to_fp16)
-
-
-def create_eva_vit_g(img_size=224, drop_path_rate=0.4, use_checkpoint=False, precision="fp16"):
- model = VisionTransformer(
- img_size=img_size,
- patch_size=14,
- use_mean_pooling=False,
- embed_dim=1408,
- depth=39,
- num_heads=1408//88,
- mlp_ratio=4.3637,
- qkv_bias=True,
- drop_path_rate=drop_path_rate,
- norm_layer=partial(nn.LayerNorm, eps=1e-6),
- use_checkpoint=use_checkpoint,
- )
- url = "https://storage.googleapis.com/sfr-vision-language-research/LAVIS/models/BLIP2/eva_vit_g.pth"
- cached_file = download_cached_file(
- url, check_hash=False, progress=True
- )
- state_dict = torch.load(cached_file, map_location="cpu")
- interpolate_pos_embed(model, state_dict)
-
- incompatible_keys = model.load_state_dict(state_dict, strict=False)
-# print(incompatible_keys)
-
- if precision == "fp16":
-# model.to("cuda")
- convert_weights_to_fp16(model)
- return model
\ No newline at end of file
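As a sanity check on the `create_eva_vit_g` configuration above (patch size 14 on a 224x224 input, embed dim 1408, `num_heads=1408//88`, `mlp_ratio=4.3637`), the derived shapes work out as follows; this is a hedged illustration with a hypothetical helper name, not part of the original module:

```python
def vit_g_shapes(img_size=224, patch_size=14, embed_dim=1408, mlp_ratio=4.3637):
    """Derived shape bookkeeping for the EVA ViT-g config."""
    num_patches = (img_size // patch_size) ** 2   # 16 x 16 patch grid
    seq_len = num_patches + 1                     # plus the cls token
    num_heads = embed_dim // 88                   # head width 88 -> 16 heads
    mlp_hidden = int(embed_dim * mlp_ratio)       # Mlp hidden width
    return num_patches, seq_len, num_heads, mlp_hidden
```

So each block attends over 257 tokens with 16 heads and expands to a 6144-wide MLP.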
diff --git a/spaces/CVPR/LIVE/pybind11/include/pybind11/attr.h b/spaces/CVPR/LIVE/pybind11/include/pybind11/attr.h
deleted file mode 100644
index 54065fc9e10a075e1a2de5d6095e88d4b0a4aca2..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/include/pybind11/attr.h
+++ /dev/null
@@ -1,528 +0,0 @@
-/*
- pybind11/attr.h: Infrastructure for processing custom
- type and function attributes
-
- Copyright (c) 2016 Wenzel Jakob
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-*/
-
-#pragma once
-
-#include "cast.h"
-
-PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE)
-
-/// \addtogroup annotations
-/// @{
-
-/// Annotation for methods
-struct is_method { handle class_; is_method(const handle &c) : class_(c) { } };
-
-/// Annotation for operators
-struct is_operator { };
-
-/// Annotation for classes that cannot be subclassed
-struct is_final { };
-
-/// Annotation for parent scope
-struct scope { handle value; scope(const handle &s) : value(s) { } };
-
-/// Annotation for documentation
-struct doc { const char *value; doc(const char *value) : value(value) { } };
-
-/// Annotation for function names
-struct name { const char *value; name(const char *value) : value(value) { } };
-
-/// Annotation indicating that a function is an overload associated with a given "sibling"
-struct sibling { handle value; sibling(const handle &value) : value(value.ptr()) { } };
-
-/// Annotation indicating that a class derives from another given type
-template <typename T> struct base {
- PYBIND11_DEPRECATED("base<T>() was deprecated in favor of specifying 'T' as a template argument to class_")
- base() { }
-};
-
-/// Keep patient alive while nurse lives
-template <size_t Nurse, size_t Patient> struct keep_alive { };
-
-/// Annotation indicating that a class is involved in a multiple inheritance relationship
-struct multiple_inheritance { };
-
-/// Annotation which enables dynamic attributes, i.e. adds `__dict__` to a class
-struct dynamic_attr { };
-
-/// Annotation which enables the buffer protocol for a type
-struct buffer_protocol { };
-
-/// Annotation which requests that a special metaclass is created for a type
-struct metaclass {
- handle value;
-
- PYBIND11_DEPRECATED("py::metaclass() is no longer required. It's turned on by default now.")
- metaclass() {}
-
- /// Override pybind11's default metaclass
- explicit metaclass(handle value) : value(value) { }
-};
-
-/// Annotation that marks a class as local to the module:
-struct module_local { const bool value; constexpr module_local(bool v = true) : value(v) { } };
-
-/// Annotation to mark enums as an arithmetic type
-struct arithmetic { };
-
-/** \rst
- A call policy which places one or more guard variables (``Ts...``) around the function call.
-
- For example, this definition:
-
- .. code-block:: cpp
-
- m.def("foo", foo, py::call_guard<T>());
-
- is equivalent to the following pseudocode:
-
- .. code-block:: cpp
-
- m.def("foo", [](args...) {
- T scope_guard;
- return foo(args...); // forwarded arguments
- });
- \endrst */
-template <typename... Ts> struct call_guard;
-
-template <> struct call_guard<> { using type = detail::void_type; };
-
-template <typename T>
-struct call_guard<T> {
- static_assert(std::is_default_constructible<T>::value,
- "The guard type must be default constructible");
-
- using type = T;
-};
-
-template <typename T, typename... Ts>
-struct call_guard<T, Ts...> {
- struct type {
- T guard{}; // Compose multiple guard types with left-to-right default-constructor order
- typename call_guard<Ts...>::type next{};
- };
-};
-
-/// @} annotations
-
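The `call_guard<Ts...>` recursion above composes guard types so that, at call time, the guards are default-constructed left-to-right before the bound function runs and destroyed in reverse order when the call scope exits. A rough Python analog of that semantics (toy helpers, not pybind11 API):

```python
events = []

def make_guard(tag):
    """Build a toy guard class that records construction and teardown."""
    class Guard:
        def __init__(self):
            events.append(("acquire", tag))
        def release(self):
            events.append(("release", tag))
    return Guard

def call_guarded(fn, *guard_types):
    """Wrap fn the way py::call_guard<A, B>() wraps a bound function:
    guards are built left-to-right before the call and torn down in
    reverse order afterwards, mirroring C++ scope-exit semantics."""
    def wrapper(*args, **kwargs):
        guards = [g() for g in guard_types]   # A first, then B
        try:
            return fn(*args, **kwargs)
        finally:
            for g in reversed(guards):        # B released before A
                g.release()
    return wrapper
```

The GIL-release guard is the canonical use: acquire-before-call, restore-after-call, regardless of how the call exits.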
-PYBIND11_NAMESPACE_BEGIN(detail)
-/* Forward declarations */
-enum op_id : int;
-enum op_type : int;
-struct undefined_t;
-template <op_id id, op_type ot, typename L, typename R> struct op_;
-inline void keep_alive_impl(size_t Nurse, size_t Patient, function_call &call, handle ret);
-
-/// Internal data structure which holds metadata about a keyword argument
-struct argument_record {
- const char *name; ///< Argument name
- const char *descr; ///< Human-readable version of the argument value
- handle value; ///< Associated Python object
- bool convert : 1; ///< True if the argument is allowed to convert when loading
- bool none : 1; ///< True if None is allowed when loading
-
- argument_record(const char *name, const char *descr, handle value, bool convert, bool none)
- : name(name), descr(descr), value(value), convert(convert), none(none) { }
-};
-
-/// Internal data structure which holds metadata about a bound function (signature, overloads, etc.)
-struct function_record {
- function_record()
- : is_constructor(false), is_new_style_constructor(false), is_stateless(false),
- is_operator(false), is_method(false),
- has_args(false), has_kwargs(false), has_kwonly_args(false) { }
-
- /// Function name
- char *name = nullptr; /* why no C++ strings? They generate heavier code.. */
-
- // User-specified documentation string
- char *doc = nullptr;
-
- /// Human-readable version of the function signature
- char *signature = nullptr;
-
- /// List of registered keyword arguments
- std::vector<argument_record> args;
-
- /// Pointer to lambda function which converts arguments and performs the actual call
- handle (*impl) (function_call &) = nullptr;
-
- /// Storage for the wrapped function pointer and captured data, if any
- void *data[3] = { };
-
- /// Pointer to custom destructor for 'data' (if needed)
- void (*free_data) (function_record *ptr) = nullptr;
-
- /// Return value policy associated with this function
- return_value_policy policy = return_value_policy::automatic;
-
- /// True if name == '__init__'
- bool is_constructor : 1;
-
- /// True if this is a new-style `__init__` defined in `detail/init.h`
- bool is_new_style_constructor : 1;
-
- /// True if this is a stateless function pointer
- bool is_stateless : 1;
-
- /// True if this is an operator (__add__), etc.
- bool is_operator : 1;
-
- /// True if this is a method
- bool is_method : 1;
-
- /// True if the function has a '*args' argument
- bool has_args : 1;
-
- /// True if the function has a '**kwargs' argument
- bool has_kwargs : 1;
-
- /// True once a 'py::kwonly' is encountered (any following args are keyword-only)
- bool has_kwonly_args : 1;
-
- /// Number of arguments (including py::args and/or py::kwargs, if present)
- std::uint16_t nargs;
-
- /// Number of trailing arguments (counted in `nargs`) that are keyword-only
- std::uint16_t nargs_kwonly = 0;
-
- /// Python method object
- PyMethodDef *def = nullptr;
-
- /// Python handle to the parent scope (a class or a module)
- handle scope;
-
- /// Python handle to the sibling function representing an overload chain
- handle sibling;
-
- /// Pointer to next overload
- function_record *next = nullptr;
-};
-
-/// Special data structure which (temporarily) holds metadata about a bound class
-struct type_record {
- PYBIND11_NOINLINE type_record()
- : multiple_inheritance(false), dynamic_attr(false), buffer_protocol(false),
- default_holder(true), module_local(false), is_final(false) { }
-
- /// Handle to the parent scope
- handle scope;
-
- /// Name of the class
- const char *name = nullptr;
-
- // Pointer to RTTI type_info data structure
- const std::type_info *type = nullptr;
-
- /// How large is the underlying C++ type?
- size_t type_size = 0;
-
- /// What is the alignment of the underlying C++ type?
- size_t type_align = 0;
-
- /// How large is the type's holder?
- size_t holder_size = 0;
-
- /// The global operator new can be overridden with a class-specific variant
- void *(*operator_new)(size_t) = nullptr;
-
- /// Function pointer to class_<..>::init_instance
- void (*init_instance)(instance *, const void *) = nullptr;
-
- /// Function pointer to class_<..>::dealloc
- void (*dealloc)(detail::value_and_holder &) = nullptr;
-
- /// List of base classes of the newly created type
- list bases;
-
- /// Optional docstring
- const char *doc = nullptr;
-
- /// Custom metaclass (optional)
- handle metaclass;
-
- /// Multiple inheritance marker
- bool multiple_inheritance : 1;
-
- /// Does the class manage a __dict__?
- bool dynamic_attr : 1;
-
- /// Does the class implement the buffer protocol?
- bool buffer_protocol : 1;
-
- /// Is the default (unique_ptr) holder type used?
- bool default_holder : 1;
-
- /// Is the class definition local to the module shared object?
- bool module_local : 1;
-
- /// Is the class inheritable from python classes?
- bool is_final : 1;
-
- PYBIND11_NOINLINE void add_base(const std::type_info &base, void *(*caster)(void *)) {
- auto base_info = detail::get_type_info(base, false);
- if (!base_info) {
- std::string tname(base.name());
- detail::clean_type_id(tname);
- pybind11_fail("generic_type: type \"" + std::string(name) +
- "\" referenced unknown base type \"" + tname + "\"");
- }
-
- if (default_holder != base_info->default_holder) {
- std::string tname(base.name());
- detail::clean_type_id(tname);
- pybind11_fail("generic_type: type \"" + std::string(name) + "\" " +
- (default_holder ? "does not have" : "has") +
- " a non-default holder type while its base \"" + tname + "\" " +
- (base_info->default_holder ? "does not" : "does"));
- }
-
- bases.append((PyObject *) base_info->type);
-
- if (base_info->type->tp_dictoffset != 0)
- dynamic_attr = true;
-
- if (caster)
- base_info->implicit_casts.emplace_back(type, caster);
- }
-};
-
-inline function_call::function_call(const function_record &f, handle p) :
- func(f), parent(p) {
- args.reserve(f.nargs);
- args_convert.reserve(f.nargs);
-}
-
-/// Tag for a new-style `__init__` defined in `detail/init.h`
-struct is_new_style_constructor { };
-
-/**
- * Partial template specializations to process custom attributes provided to
- * cpp_function_ and class_. These are either used to initialize the respective
- * fields in the type_record and function_record data structures or executed at
- * runtime to deal with custom call policies (e.g. keep_alive).
- */
-template <typename T> struct process_attribute;
-
-template <typename T> struct process_attribute_default {
- /// Default implementation: do nothing
- static void init(const T &, function_record *) { }
- static void init(const T &, type_record *) { }
- static void precall(function_call &) { }
- static void postcall(function_call &, handle) { }
-};
-
-/// Process an attribute specifying the function's name
-template <> struct process_attribute<name> : process_attribute_default<name> {
- static void init(const name &n, function_record *r) { r->name = const_cast<char *>(n.value); }
-};
-
-/// Process an attribute specifying the function's docstring
-template <> struct process_attribute<doc> : process_attribute_default<doc> {
- static void init(const doc &n, function_record *r) { r->doc = const_cast<char *>(n.value); }
-};
-
-/// Process an attribute specifying the function's docstring (provided as a C-style string)
-template <> struct process_attribute<const char *> : process_attribute_default<const char *> {
- static void init(const char *d, function_record *r) { r->doc = const_cast<char *>(d); }
- static void init(const char *d, type_record *r) { r->doc = const_cast<char *>(d); }
-};
-template <> struct process_attribute<char *> : process_attribute<const char *> { };
-
-/// Process an attribute indicating the function's return value policy
-template <> struct process_attribute<return_value_policy> : process_attribute_default<return_value_policy> {
- static void init(const return_value_policy &p, function_record *r) { r->policy = p; }
-};
-
-/// Process an attribute which indicates that this is an overloaded function associated with a given sibling
-template <> struct process_attribute<sibling> : process_attribute_default<sibling> {
- static void init(const sibling &s, function_record *r) { r->sibling = s.value; }
-};
-
-/// Process an attribute which indicates that this function is a method
-template <> struct process_attribute<is_method> : process_attribute_default<is_method> {
- static void init(const is_method &s, function_record *r) { r->is_method = true; r->scope = s.class_; }
-};
-
-/// Process an attribute which indicates the parent scope of a method
-template <> struct process_attribute<scope> : process_attribute_default<scope> {
- static void init(const scope &s, function_record *r) { r->scope = s.value; }
-};
-
-/// Process an attribute which indicates that this function is an operator
-template <> struct process_attribute<is_operator> : process_attribute_default<is_operator> {
- static void init(const is_operator &, function_record *r) { r->is_operator = true; }
-};
-
-template <> struct process_attribute<is_new_style_constructor> : process_attribute_default<is_new_style_constructor> {
- static void init(const is_new_style_constructor &, function_record *r) { r->is_new_style_constructor = true; }
-};
-
-inline void process_kwonly_arg(const arg &a, function_record *r) {
- if (!a.name || strlen(a.name) == 0)
- pybind11_fail("arg(): cannot specify an unnamed argument after a kwonly() annotation");
- ++r->nargs_kwonly;
-}
-
-/// Process a keyword argument attribute (*without* a default value)
-template <> struct process_attribute<arg> : process_attribute_default<arg> {
- static void init(const arg &a, function_record *r) {
- if (r->is_method && r->args.empty())
- r->args.emplace_back("self", nullptr, handle(), true /*convert*/, false /*none not allowed*/);
- r->args.emplace_back(a.name, nullptr, handle(), !a.flag_noconvert, a.flag_none);
-
- if (r->has_kwonly_args) process_kwonly_arg(a, r);
- }
-};
-
-/// Process a keyword argument attribute (*with* a default value)
-template <> struct process_attribute<arg_v> : process_attribute_default<arg_v> {
- static void init(const arg_v &a, function_record *r) {
- if (r->is_method && r->args.empty())
- r->args.emplace_back("self", nullptr /*descr*/, handle() /*parent*/, true /*convert*/, false /*none not allowed*/);
-
- if (!a.value) {
-#if !defined(NDEBUG)
- std::string descr("'");
- if (a.name) descr += std::string(a.name) + ": ";
- descr += a.type + "'";
- if (r->is_method) {
- if (r->name)
- descr += " in method '" + (std::string) str(r->scope) + "." + (std::string) r->name + "'";
- else
- descr += " in method of '" + (std::string) str(r->scope) + "'";
- } else if (r->name) {
- descr += " in function '" + (std::string) r->name + "'";
- }
- pybind11_fail("arg(): could not convert default argument "
- + descr + " into a Python object (type not registered yet?)");
-#else
- pybind11_fail("arg(): could not convert default argument "
- "into a Python object (type not registered yet?). "
- "Compile in debug mode for more information.");
-#endif
- }
- r->args.emplace_back(a.name, a.descr, a.value.inc_ref(), !a.flag_noconvert, a.flag_none);
-
- if (r->has_kwonly_args) process_kwonly_arg(a, r);
- }
-};
-
-/// Process a keyword-only-arguments-follow pseudo argument
-template <> struct process_attribute<kwonly> : process_attribute_default<kwonly> {
- static void init(const kwonly &, function_record *r) {
- r->has_kwonly_args = true;
- }
-};
-
-/// Process a parent class attribute. Single inheritance only (class_ itself already guarantees that)
-template <typename T>
-struct process_attribute<T, enable_if_t<is_pyobject<T>::value>> : process_attribute_default<handle> {
- static void init(const handle &h, type_record *r) { r->bases.append(h); }
-};
-
-/// Process a parent class attribute (deprecated, does not support multiple inheritance)
-template <typename T>
-struct process_attribute<base<T>> : process_attribute_default<base<T>> {
- static void init(const base &, type_record *r) { r->add_base(typeid(T), nullptr); }
-};
-
-/// Process a multiple inheritance attribute
-template <>
-struct process_attribute<multiple_inheritance> : process_attribute_default<multiple_inheritance> {
- static void init(const multiple_inheritance &, type_record *r) { r->multiple_inheritance = true; }
-};
-
-template <>
-struct process_attribute<dynamic_attr> : process_attribute_default<dynamic_attr> {
- static void init(const dynamic_attr &, type_record *r) { r->dynamic_attr = true; }
-};
-
-template <>
-struct process_attribute<is_final> : process_attribute_default<is_final> {
- static void init(const is_final &, type_record *r) { r->is_final = true; }
-};
-
-template <>
-struct process_attribute<buffer_protocol> : process_attribute_default<buffer_protocol> {
- static void init(const buffer_protocol &, type_record *r) { r->buffer_protocol = true; }
-};
-
-template <>
-struct process_attribute<metaclass> : process_attribute_default<metaclass> {
- static void init(const metaclass &m, type_record *r) { r->metaclass = m.value; }
-};
-
-template <>
-struct process_attribute<module_local> : process_attribute_default<module_local> {
- static void init(const module_local &l, type_record *r) { r->module_local = l.value; }
-};
-
-/// Process an 'arithmetic' attribute for enums (does nothing here)
-template <>
-struct process_attribute<arithmetic> : process_attribute_default<arithmetic> {};
-
-template <typename... Ts>
-struct process_attribute<call_guard<Ts...>> : process_attribute_default<call_guard<Ts...>> { };
-
-/**
- * Process a keep_alive call policy -- invokes keep_alive_impl during the
- * pre-call handler if both Nurse, Patient != 0 and use the post-call handler
- * otherwise
- */
-template <size_t Nurse, size_t Patient> struct process_attribute<keep_alive<Nurse, Patient>> : public process_attribute_default<keep_alive<Nurse, Patient>> {
- template <size_t N = Nurse, size_t P = Patient, enable_if_t<N != 0 && P != 0, int> = 0>
- static void precall(function_call &call) { keep_alive_impl(Nurse, Patient, call, handle()); }
- template <size_t N = Nurse, size_t P = Patient, enable_if_t<N != 0 && P != 0, int> = 0>
- static void postcall(function_call &, handle) { }
- template <size_t N = Nurse, size_t P = Patient, enable_if_t<N == 0 || P == 0, int> = 0>
- static void precall(function_call &) { }
- template <size_t N = Nurse, size_t P = Patient, enable_if_t<N == 0 || P == 0, int> = 0>
- static void postcall(function_call &call, handle ret) { keep_alive_impl(Nurse, Patient, call, ret); }
-};
-
-/// Recursively iterate over variadic template arguments
-template <typename... Args> struct process_attributes {
- static void init(const Args&... args, function_record *r) {
- int unused[] = { 0, (process_attribute<typename std::decay<Args>::type>::init(args, r), 0) ... };
- ignore_unused(unused);
- }
- static void init(const Args&... args, type_record *r) {
- int unused[] = { 0, (process_attribute<typename std::decay<Args>::type>::init(args, r), 0) ... };
- ignore_unused(unused);
- }
- static void precall(function_call &call) {
- int unused[] = { 0, (process_attribute<typename std::decay<Args>::type>::precall(call), 0) ... };
- ignore_unused(unused);
- }
- static void postcall(function_call &call, handle fn_ret) {
- int unused[] = { 0, (process_attribute<typename std::decay<Args>::type>::postcall(call, fn_ret), 0) ... };
- ignore_unused(unused);
- }
-};
-
-template <typename T>
-using is_call_guard = is_instantiation<call_guard, T>;
-
-/// Extract the ``type`` from the first `call_guard` in `Extras...` (or `void_type` if none found)
-template <typename... Extra>
-using extract_guard_t = typename exactly_one_t<is_call_guard, call_guard<>, Extra...>::type;
-
-/// Check the number of named arguments at compile time
-template <typename... Extra,
-          size_t named = constexpr_sum(std::is_base_of<arg, Extra>::value...),
-          size_t self = constexpr_sum(std::is_same<is_method, Extra>::value...)>
-constexpr bool expected_num_args(size_t nargs, bool has_args, bool has_kwargs) {
- return named == 0 || (self + named + has_args + has_kwargs) == nargs;
-}
-
-PYBIND11_NAMESPACE_END(detail)
-PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE)
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/allocator/no_throw_allocator.h b/spaces/CVPR/LIVE/thrust/thrust/detail/allocator/no_throw_allocator.h
deleted file mode 100644
index ba8c3d852988e9add8659236293a424682701489..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/allocator/no_throw_allocator.h
+++ /dev/null
@@ -1,71 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-namespace thrust
-{
-namespace detail
-{
-
-template<typename BaseAllocator>
- struct no_throw_allocator : BaseAllocator
-{
- private:
- typedef BaseAllocator super_t;
-
- public:
- inline __host__ __device__
- no_throw_allocator(const BaseAllocator &other = BaseAllocator())
- : super_t(other)
- {}
-
- template<typename U>
- struct rebind
- {
- typedef no_throw_allocator<typename super_t::template rebind<U>::other> other;
- }; // end rebind
-
- __host__ __device__
- void deallocate(typename super_t::pointer p, typename super_t::size_type n)
- {
-#ifndef __CUDA_ARCH__
- try
- {
- super_t::deallocate(p, n);
- } // end try
- catch(...)
- {
- // catch anything
- } // end catch
-#else
- super_t::deallocate(p, n);
-#endif
- } // end deallocate()
-
- inline __host__ __device__
- bool operator==(no_throw_allocator const &other) { return super_t::operator==(other); }
-
- inline __host__ __device__
- bool operator!=(no_throw_allocator const &other) { return super_t::operator!=(other); }
-}; // end no_throw_allocator
-
-} // end detail
-} // end thrust
-
-
diff --git a/spaces/CVPR/WALT/mmdet/core/mask/__init__.py b/spaces/CVPR/WALT/mmdet/core/mask/__init__.py
deleted file mode 100644
index ab1e88bc686d5c2fe72b3114cb2b3e372e73a0f8..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/core/mask/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from .mask_target import mask_target
-from .structures import BaseInstanceMasks, BitmapMasks, PolygonMasks
-from .utils import encode_mask_results, split_combined_polys
-
-__all__ = [
- 'split_combined_polys', 'mask_target', 'BaseInstanceMasks', 'BitmapMasks',
- 'PolygonMasks', 'encode_mask_results'
-]
diff --git a/spaces/CVPR/WALT/mmdet/core/mask/mask_target.py b/spaces/CVPR/WALT/mmdet/core/mask/mask_target.py
deleted file mode 100644
index 15d26a88bbf3710bd92813335918407db8c4e053..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/core/mask/mask_target.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import numpy as np
-import torch
-from torch.nn.modules.utils import _pair
-
-
-def mask_target(pos_proposals_list, pos_assigned_gt_inds_list, gt_masks_list,
- cfg):
- """Compute mask target for positive proposals in multiple images.
-
- Args:
- pos_proposals_list (list[Tensor]): Positive proposals in multiple
- images.
- pos_assigned_gt_inds_list (list[Tensor]): Assigned GT indices for each
- positive proposals.
- gt_masks_list (list[:obj:`BaseInstanceMasks`]): Ground truth masks of
- each image.
- cfg (dict): Config dict that specifies the mask size.
-
- Returns:
- list[Tensor]: Mask target of each image.
-
- Example:
- >>> import mmcv
- >>> import mmdet
- >>> from mmdet.core.mask import BitmapMasks
- >>> from mmdet.core.mask.mask_target import *
- >>> H, W = 17, 18
- >>> cfg = mmcv.Config({'mask_size': (13, 14)})
- >>> rng = np.random.RandomState(0)
- >>> # Positive proposals (tl_x, tl_y, br_x, br_y) for each image
- >>> pos_proposals_list = [
- >>> torch.Tensor([
- >>> [ 7.2425, 5.5929, 13.9414, 14.9541],
- >>> [ 7.3241, 3.6170, 16.3850, 15.3102],
- >>> ]),
- >>> torch.Tensor([
- >>> [ 4.8448, 6.4010, 7.0314, 9.7681],
- >>> [ 5.9790, 2.6989, 7.4416, 4.8580],
- >>> [ 0.0000, 0.0000, 0.1398, 9.8232],
- >>> ]),
- >>> ]
- >>> # Corresponding class index for each proposal for each image
- >>> pos_assigned_gt_inds_list = [
- >>> torch.LongTensor([7, 0]),
- >>> torch.LongTensor([5, 4, 1]),
- >>> ]
- >>> # Ground truth mask for each true object for each image
- >>> gt_masks_list = [
- >>> BitmapMasks(rng.rand(8, H, W), height=H, width=W),
- >>> BitmapMasks(rng.rand(6, H, W), height=H, width=W),
- >>> ]
- >>> mask_targets = mask_target(
- >>> pos_proposals_list, pos_assigned_gt_inds_list,
- >>> gt_masks_list, cfg)
- >>> assert mask_targets.shape == (5,) + cfg['mask_size']
- """
- cfg_list = [cfg for _ in range(len(pos_proposals_list))]
- mask_targets = map(mask_target_single, pos_proposals_list,
- pos_assigned_gt_inds_list, gt_masks_list, cfg_list)
- mask_targets = list(mask_targets)
- if len(mask_targets) > 0:
- mask_targets = torch.cat(mask_targets)
- return mask_targets
-
-
-def mask_target_single(pos_proposals, pos_assigned_gt_inds, gt_masks, cfg):
- """Compute mask target for each positive proposal in the image.
-
- Args:
- pos_proposals (Tensor): Positive proposals.
- pos_assigned_gt_inds (Tensor): Assigned GT inds of positive proposals.
- gt_masks (:obj:`BaseInstanceMasks`): GT masks in the format of Bitmap
- or Polygon.
- cfg (dict): Config dict that indicate the mask size.
-
- Returns:
- Tensor: Mask target of each positive proposals in the image.
-
- Example:
- >>> import mmcv
- >>> import mmdet
- >>> from mmdet.core.mask import BitmapMasks
- >>> from mmdet.core.mask.mask_target import * # NOQA
- >>> H, W = 32, 32
- >>> cfg = mmcv.Config({'mask_size': (7, 11)})
- >>> rng = np.random.RandomState(0)
- >>> # Masks for each ground truth box (relative to the image)
- >>> gt_masks_data = rng.rand(3, H, W)
- >>> gt_masks = BitmapMasks(gt_masks_data, height=H, width=W)
- >>> # Predicted positive boxes in one image
- >>> pos_proposals = torch.FloatTensor([
- >>> [ 16.2, 5.5, 19.9, 20.9],
- >>> [ 17.3, 13.6, 19.3, 19.3],
- >>> [ 14.8, 16.4, 17.0, 23.7],
- >>> [ 0.0, 0.0, 16.0, 16.0],
- >>> [ 4.0, 0.0, 20.0, 16.0],
- >>> ])
- >>> # For each predicted proposal, its assignment to a gt mask
- >>> pos_assigned_gt_inds = torch.LongTensor([0, 1, 2, 1, 1])
- >>> mask_targets = mask_target_single(
- >>> pos_proposals, pos_assigned_gt_inds, gt_masks, cfg)
- >>> assert mask_targets.shape == (5,) + cfg['mask_size']
- """
- device = pos_proposals.device
- mask_size = _pair(cfg.mask_size)
- num_pos = pos_proposals.size(0)
- if num_pos > 0:
- proposals_np = pos_proposals.cpu().numpy()
- maxh, maxw = gt_masks.height, gt_masks.width
- proposals_np[:, [0, 2]] = np.clip(proposals_np[:, [0, 2]], 0, maxw)
- proposals_np[:, [1, 3]] = np.clip(proposals_np[:, [1, 3]], 0, maxh)
- pos_assigned_gt_inds = pos_assigned_gt_inds.cpu().numpy()
-
- mask_targets = gt_masks.crop_and_resize(
- proposals_np, mask_size, device=device,
- inds=pos_assigned_gt_inds).to_ndarray()
-
- mask_targets = torch.from_numpy(mask_targets).float().to(device)
- else:
- mask_targets = pos_proposals.new_zeros((0, ) + mask_size)
-
- return mask_targets
diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/datasets/cityscapes_panoptic.py b/spaces/CVPR/regionclip-demo/detectron2/data/datasets/cityscapes_panoptic.py
deleted file mode 100644
index 48c136f1623261b079591065fec7c7fc38165076..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/data/datasets/cityscapes_panoptic.py
+++ /dev/null
@@ -1,187 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import json
-import logging
-import os
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.data.datasets.builtin_meta import CITYSCAPES_CATEGORIES
-from detectron2.utils.file_io import PathManager
-
-"""
-This file contains functions to register the Cityscapes panoptic dataset to the DatasetCatalog.
-"""
-
-
-logger = logging.getLogger(__name__)
-
-
-def get_cityscapes_panoptic_files(image_dir, gt_dir, json_info):
- files = []
- # scan through the directory
- cities = PathManager.ls(image_dir)
- logger.info(f"{len(cities)} cities found in '{image_dir}'.")
- image_dict = {}
- for city in cities:
- city_img_dir = os.path.join(image_dir, city)
- for basename in PathManager.ls(city_img_dir):
- image_file = os.path.join(city_img_dir, basename)
-
- suffix = "_leftImg8bit.png"
- assert basename.endswith(suffix), basename
- basename = os.path.basename(basename)[: -len(suffix)]
-
- image_dict[basename] = image_file
-
- for ann in json_info["annotations"]:
- image_file = image_dict.get(ann["image_id"], None)
- assert image_file is not None, "No image {} found for annotation {}".format(
- ann["image_id"], ann["file_name"]
- )
- label_file = os.path.join(gt_dir, ann["file_name"])
- segments_info = ann["segments_info"]
-
- files.append((image_file, label_file, segments_info))
-
- assert len(files), "No images found in {}".format(image_dir)
- assert PathManager.isfile(files[0][0]), files[0][0]
- assert PathManager.isfile(files[0][1]), files[0][1]
- return files
-
-
-def load_cityscapes_panoptic(image_dir, gt_dir, gt_json, meta):
- """
- Args:
- image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train".
- gt_dir (str): path to the raw annotations. e.g.,
- "~/cityscapes/gtFine/cityscapes_panoptic_train".
- gt_json (str): path to the json file. e.g.,
- "~/cityscapes/gtFine/cityscapes_panoptic_train.json".
- meta (dict): dictionary containing "thing_dataset_id_to_contiguous_id"
- and "stuff_dataset_id_to_contiguous_id" to map category ids to
- contiguous ids for training.
-
- Returns:
- list[dict]: a list of dicts in Detectron2 standard format. (See
- `Using Custom Datasets `_ )
- """
-
- def _convert_category_id(segment_info, meta):
- if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]:
- segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][
- segment_info["category_id"]
- ]
- else:
- segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][
- segment_info["category_id"]
- ]
- return segment_info
-
- assert os.path.exists(
- gt_json
- ), "Please run `python cityscapesscripts/preparation/createPanopticImgs.py` to generate label files." # noqa
- with open(gt_json) as f:
- json_info = json.load(f)
- files = get_cityscapes_panoptic_files(image_dir, gt_dir, json_info)
- ret = []
- for image_file, label_file, segments_info in files:
- sem_label_file = (
- image_file.replace("leftImg8bit", "gtFine").split(".")[0] + "_labelTrainIds.png"
- )
- segments_info = [_convert_category_id(x, meta) for x in segments_info]
- ret.append(
- {
- "file_name": image_file,
- "image_id": "_".join(
- os.path.splitext(os.path.basename(image_file))[0].split("_")[:3]
- ),
- "sem_seg_file_name": sem_label_file,
- "pan_seg_file_name": label_file,
- "segments_info": segments_info,
- }
- )
- assert len(ret), f"No images found in {image_dir}!"
- assert PathManager.isfile(
- ret[0]["sem_seg_file_name"]
- ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa
- assert PathManager.isfile(
- ret[0]["pan_seg_file_name"]
- ), "Please generate panoptic annotation with python cityscapesscripts/preparation/createPanopticImgs.py" # noqa
- return ret
-
-
-_RAW_CITYSCAPES_PANOPTIC_SPLITS = {
- "cityscapes_fine_panoptic_train": (
- "cityscapes/leftImg8bit/train",
- "cityscapes/gtFine/cityscapes_panoptic_train",
- "cityscapes/gtFine/cityscapes_panoptic_train.json",
- ),
- "cityscapes_fine_panoptic_val": (
- "cityscapes/leftImg8bit/val",
- "cityscapes/gtFine/cityscapes_panoptic_val",
- "cityscapes/gtFine/cityscapes_panoptic_val.json",
- ),
- # "cityscapes_fine_panoptic_test": not supported yet
-}
-
-
-def register_all_cityscapes_panoptic(root):
- meta = {}
- # The following metadata maps contiguous id from [0, #thing categories +
- # #stuff categories) to their names and colors. We keep two replicas of the
- # same names and colors under "thing_*" and "stuff_*" because the current
- # visualization function in D2 handles thing and stuff classes differently
- # due to some heuristic used in Panoptic FPN. We keep the same naming to
- # enable reusing existing visualization functions.
- thing_classes = [k["name"] for k in CITYSCAPES_CATEGORIES]
- thing_colors = [k["color"] for k in CITYSCAPES_CATEGORIES]
- stuff_classes = [k["name"] for k in CITYSCAPES_CATEGORIES]
- stuff_colors = [k["color"] for k in CITYSCAPES_CATEGORIES]
-
- meta["thing_classes"] = thing_classes
- meta["thing_colors"] = thing_colors
- meta["stuff_classes"] = stuff_classes
- meta["stuff_colors"] = stuff_colors
-
- # There are three types of ids in cityscapes panoptic segmentation:
- # (1) category id: like semantic segmentation, it is the class id for each
- # pixel. Since there are some classes not used in evaluation, the category
- # id is not always contiguous and thus we have two set of category ids:
- # - original category id: category id in the original dataset, mainly
- # used for evaluation.
- # - contiguous category id: [0, #classes), in order to train the classifier
- # (2) instance id: this id is used to differentiate different instances from
- # the same category. For "stuff" classes, the instance id is always 0; for
- # "thing" classes, the instance id starts from 1 and 0 is reserved for
- # ignored instances (e.g. crowd annotation).
- # (3) panoptic id: this is the compact id that encode both category and
- # instance id by: category_id * 1000 + instance_id.
- thing_dataset_id_to_contiguous_id = {}
- stuff_dataset_id_to_contiguous_id = {}
-
- for k in CITYSCAPES_CATEGORIES:
- if k["isthing"] == 1:
- thing_dataset_id_to_contiguous_id[k["id"]] = k["trainId"]
- else:
- stuff_dataset_id_to_contiguous_id[k["id"]] = k["trainId"]
-
- meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id
- meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id
-
- for key, (image_dir, gt_dir, gt_json) in _RAW_CITYSCAPES_PANOPTIC_SPLITS.items():
- image_dir = os.path.join(root, image_dir)
- gt_dir = os.path.join(root, gt_dir)
- gt_json = os.path.join(root, gt_json)
-
- DatasetCatalog.register(
- key, lambda x=image_dir, y=gt_dir, z=gt_json: load_cityscapes_panoptic(x, y, z, meta)
- )
- MetadataCatalog.get(key).set(
- panoptic_root=gt_dir,
- image_root=image_dir,
- panoptic_json=gt_json,
- gt_dir=gt_dir.replace("cityscapes_panoptic_", ""),
- evaluator_type="cityscapes_panoptic_seg",
- ignore_label=255,
- label_divisor=1000,
- **meta,
- )
diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/utils/visualization.py b/spaces/Caoyunkang/Segment-Any-Anomaly/utils/visualization.py
deleted file mode 100644
index bfb5ef708f32ee33be209c10acf5f80c8ae950ab..0000000000000000000000000000000000000000
--- a/spaces/Caoyunkang/Segment-Any-Anomaly/utils/visualization.py
+++ /dev/null
@@ -1,131 +0,0 @@
-import cv2
-import matplotlib
-
-matplotlib.use("Agg")
-import matplotlib.pyplot as plt
-import numpy as np
-import os
-import seaborn as sns
-
-##
-from sklearn.manifold import TSNE
-from sklearn.decomposition import PCA
-
-##
-import matplotlib.ticker as mtick
-
-
-def plot_sample_cv2(names, imgs, scores_: dict, gts, save_folder=None):
- # get subplot number
- total_number = len(imgs)
-
- scores = scores_.copy()
- # normalize anomaly scores to [0, 255]
- for k, v in scores.items():
- max_value = np.max(v)
- min_value = np.min(v)
-
- scores[k] = (scores[k] - min_value) / (max_value - min_value) * 255
- scores[k] = scores[k].astype(np.uint8)
- # draw gts
- mask_imgs = []
- for idx in range(total_number):
- gts_ = gts[idx]
- mask_imgs_ = imgs[idx].copy()
- mask_imgs_[gts_ > 0.5] = (0, 0, 255)
- mask_imgs.append(mask_imgs_)
-
- # save imgs
- for idx in range(total_number):
- cv2.imwrite(os.path.join(save_folder, f'{names[idx]}_ori.jpg'), imgs[idx])
- cv2.imwrite(os.path.join(save_folder, f'{names[idx]}_gt.jpg'), mask_imgs[idx])
-
- for key in scores:
- heat_map = cv2.applyColorMap(scores[key][idx], cv2.COLORMAP_JET)
- visz_map = cv2.addWeighted(heat_map, 0.5, imgs[idx], 0.5, 0)
- cv2.imwrite(os.path.join(save_folder, f'{names[idx]}_{key}.jpg'),
- visz_map)
-
-
-def plot_anomaly_score_distributions(scores: dict, ground_truths_list, save_folder, class_name):
- ground_truths = np.stack(ground_truths_list, axis=0)
-
- N_COUNT = 100000
-
- for k, v in scores.items():
- layer_score = np.stack(v, axis=0)
- normal_score = layer_score[ground_truths == 0]
- abnormal_score = layer_score[ground_truths != 0]
-
- plt.clf()
- plt.figure(figsize=(4, 3))
- ax = plt.gca()
- ax.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.2f'))
- ax.xaxis.set_major_formatter(mtick.FormatStrFormatter('%.2f'))
-
- # with plt.style.context(['science', 'ieee', 'no-latex']):
- sns.histplot(np.random.choice(normal_score, N_COUNT), color="green", bins=50, label='${d(p_n)}$',
- stat='probability', alpha=.75)
- sns.histplot(np.random.choice(abnormal_score, N_COUNT), color="red", bins=50, label='${d(p_a)}$',
- stat='probability', alpha=.75)
-
- plt.xlim([0, 3])
-
- save_path = os.path.join(save_folder, f'distributions_{class_name}_{k}.jpg')
-
- plt.savefig(save_path, bbox_inches='tight', dpi=300)
-
-
-valid_feature_visualization_methods = ['TSNE', 'PCA']
-
-
-def visualize_feature(features, labels, legends, n_components=3, method='TSNE'):
- assert method in valid_feature_visualization_methods
- assert n_components in [2, 3]
-
- if method == 'TSNE':
- model = TSNE(n_components=n_components)
- elif method == 'PCA':
- model = PCA(n_components=n_components)
-
- else:
- raise NotImplementedError
-
- feat_proj = model.fit_transform(features)
-
- if n_components == 2:
- ax = scatter_2d(feat_proj, labels)
- elif n_components == 3:
- ax = scatter_3d(feat_proj, labels)
- else:
- raise NotImplementedError
-
- plt.legend(legends)
- plt.axis('off')
-
-
-def scatter_3d(feat_proj, label):
- plt.clf()
- ax1 = plt.axes(projection='3d')
-
- label_unique = np.unique(label)
-
- for l in label_unique:
- ax1.scatter3D(feat_proj[label == l, 0],
- feat_proj[label == l, 1],
- feat_proj[label == l, 2], s=5)
-
- return ax1
-
-
-def scatter_2d(feat_proj, label):
- plt.clf()
- ax1 = plt.axes()
-
- label_unique = np.unique(label)
-
- for l in label_unique:
- ax1.scatter(feat_proj[label == l, 0],
- feat_proj[label == l, 1], s=5)
-
- return ax1
diff --git a/spaces/CourserLi/classify/app.py b/spaces/CourserLi/classify/app.py
deleted file mode 100644
index 1de0b5aafbca6124f796041de88f74a7ccc76e55..0000000000000000000000000000000000000000
--- a/spaces/CourserLi/classify/app.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# AUTOGENERATED! DO NOT EDIT! File to edit: app.ipynb.
-
-# %% auto 0
-__all__ = ['temp', 'learn', 'categories', 'image', 'label', 'examples', 'intf', 'is_cat', 'classify_image']
-
-# %% app.ipynb 1
-from fastai.vision.all import *
-import gradio as gr
-
-# %% app.ipynb 2
-def is_cat(x):
- return x[0].isupper()
-
-learn = load_learner('model.pkl')
-
-# %% app.ipynb 5
-categories = ('Dog', 'Cat')
-
-def classify_image(img):
- pred, idx, probs = learn.predict(img)
- return dict(zip(categories, map(float, probs)))
-
-# %% app.ipynb 8
-image = gr.Image(shape=(192, 192))
-label = gr.Label()
-examples = ['dog.png', 'cat.png']
-
-intf = gr.Interface(fn=classify_image,
- inputs=image,
- outputs=label,
- examples=examples)
-intf.launch(inline=False)
diff --git a/spaces/DEfiAnTH/SPSpace/Dockerfile b/spaces/DEfiAnTH/SPSpace/Dockerfile
deleted file mode 100644
index 29ec24bfb63cdbf2c92fc41c33e24b329aa6e1ca..0000000000000000000000000000000000000000
--- a/spaces/DEfiAnTH/SPSpace/Dockerfile
+++ /dev/null
@@ -1,65 +0,0 @@
-FROM zenmldocker/zenml-server:latest
-
-ENV ZENML_ANALYTICS_OPT_IN=true
-ENV ZENML_SERVER_DEPLOYMENT_TYPE="hf_spaces"
-ENV ZENML_LOGGING_VERBOSITY=DEBUG
-
-################################################################################
-#
-# CONFIGURING YOUR ZENML HF SPACES SERVER
-# ---------------------------------------
-# By default this space is not persistent. All ZenML metadata is stored in
-# localstorage in a SQLite database. If you would like to make your storage
-# persistent, use the appropriate environment variables below to configure the
-# image to use a MySQL-compatible database service that is reachable from the
-# container. See https://docs.zenml.io/getting-started/deploying-zenml/docker
-# for more information on how to configure these environment variables.
-
-# You can also configure the secrets store to use for your ZenML server. Be
-# sure to use Huggingface Spaces' 'Repository Secrets' feature to store any
-# secrets referenced here. See
-# https://huggingface.co/docs/hub/spaces-overview#managing-secrets for more
-# information on how to configure these environment variables.
-
-# ENV ZENML_DEFAULT_PROJECT_NAME=""
-# ENV ZENML_DEFAULT_USER_NAME=""
-# ENV ZENML_DEFAULT_USER_PASSWORD=""
-# ENV ZENML_STORE_URL=""
-# ENV ZENML_STORE_SSL_CA=""
-# ENV ZENML_STORE_SSL_CERT=""
-# ENV ZENML_STORE_SSL_KEY=""
-# ENV ZENML_STORE_SSL_VERIFY_SERVER_CERT=""
-
-# ENV ZENML_LOGGING_VERBOSITY=""
-
-# # SECRETS STORE CONFIGURATION
-# ENV ZENML_SECRETS_STORE_TYPE=""
-# ENV ZENML_SECRETS_STORE_ENCRYPTION_KEY=""
-# ENV ZENML_SECRETS_STORE_CLASS_PATH=""
-# ENV ZENML_JWT_SECRET_KEY=""
-
-# # AWS Secrets Store Configuration
-# ENV ZENML_SECRETS_STORE_REGION_NAME=""
-# ENV ZENML_SECRETS_STORE_AWS_ACCESS_KEY_ID=""
-# ENV ZENML_SECRETS_STORE_AWS_SECRET_ACCESS_KEY=""
-# ENV ZENML_SECRETS_STORE_AWS_SESSION_TOKEN=""
-# ENV ZENML_SECRETS_STORE_SECRET_LIST_REFRESH_TIMEOUT=""
-
-# # GCP Secrets Store Configuration
-# ENV ZENML_SECRETS_STORE_PROJECT_ID=""
-# ENV GOOGLE_APPLICATION_CREDENTIALS=""
-
-# # Azure Secrets Store Configuration
-# ENV ZENML_SECRETS_STORE_KEY_VAULT_NAME=""
-# ENV ZENML_SECRETS_STORE_AZURE_CLIENT_ID=""
-# ENV ZENML_SECRETS_STORE_AZURE_CLIENT_SECRET=""
-# ENV ZENML_SECRETS_STORE_AZURE_TENANT_ID=""
-
-# # Hashicorp Secrets Store Configuration
-# ENV ZENML_SECRETS_STORE_VAULT_ADDR=""
-# ENV ZENML_SECRETS_STORE_VAULT_TOKEN=""
-# ENV ZENML_SECRETS_STORE_VAULT_NAMESPACE=""
-# ENV ZENML_SECRETS_STORE_MAX_VERSIONS=""
-
-ENTRYPOINT ["uvicorn", "zenml.zen_server.zen_server_api:app", "--log-level", "debug"]
-CMD ["--proxy-headers", "--port", "8080", "--host", "0.0.0.0"]
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/psCharStrings.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/psCharStrings.py
deleted file mode 100644
index 0092a98ecee87286568b8593a4662a22235ee0e0..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/psCharStrings.py
+++ /dev/null
@@ -1,1479 +0,0 @@
-"""psCharStrings.py -- module implementing various kinds of CharStrings:
-CFF dictionary data and Type1/Type2 CharStrings.
-"""
-
-from fontTools.misc.fixedTools import (
- fixedToFloat,
- floatToFixed,
- floatToFixedToStr,
- strToFixedToFloat,
-)
-from fontTools.misc.textTools import bytechr, byteord, bytesjoin, strjoin
-from fontTools.pens.boundsPen import BoundsPen
-import struct
-import logging
-
-
-log = logging.getLogger(__name__)
-
-
-def read_operator(self, b0, data, index):
- if b0 == 12:
- op = (b0, byteord(data[index]))
- index = index + 1
- else:
- op = b0
- try:
- operator = self.operators[op]
- except KeyError:
- return None, index
- value = self.handle_operator(operator)
- return value, index
-
-
-def read_byte(self, b0, data, index):
- return b0 - 139, index
-
-
-def read_smallInt1(self, b0, data, index):
- b1 = byteord(data[index])
- return (b0 - 247) * 256 + b1 + 108, index + 1
-
-
-def read_smallInt2(self, b0, data, index):
- b1 = byteord(data[index])
- return -(b0 - 251) * 256 - b1 - 108, index + 1
-
-
-def read_shortInt(self, b0, data, index):
- (value,) = struct.unpack(">h", data[index : index + 2])
- return value, index + 2
-
-
-def read_longInt(self, b0, data, index):
- (value,) = struct.unpack(">l", data[index : index + 4])
- return value, index + 4
-
-
-def read_fixed1616(self, b0, data, index):
- (value,) = struct.unpack(">l", data[index : index + 4])
- return fixedToFloat(value, precisionBits=16), index + 4
-
-
-def read_reserved(self, b0, data, index):
-    raise NotImplementedError
-
-
-def read_realNumber(self, b0, data, index):
- number = ""
- while True:
- b = byteord(data[index])
- index = index + 1
- nibble0 = (b & 0xF0) >> 4
- nibble1 = b & 0x0F
- if nibble0 == 0xF:
- break
- number = number + realNibbles[nibble0]
- if nibble1 == 0xF:
- break
- number = number + realNibbles[nibble1]
- return float(number), index
-
-
-t1OperandEncoding = [None] * 256
-t1OperandEncoding[0:32] = (32) * [read_operator]
-t1OperandEncoding[32:247] = (247 - 32) * [read_byte]
-t1OperandEncoding[247:251] = (251 - 247) * [read_smallInt1]
-t1OperandEncoding[251:255] = (255 - 251) * [read_smallInt2]
-t1OperandEncoding[255] = read_longInt
-assert len(t1OperandEncoding) == 256
-
-t2OperandEncoding = t1OperandEncoding[:]
-t2OperandEncoding[28] = read_shortInt
-t2OperandEncoding[255] = read_fixed1616
-
-cffDictOperandEncoding = t2OperandEncoding[:]
-cffDictOperandEncoding[29] = read_longInt
-cffDictOperandEncoding[30] = read_realNumber
-cffDictOperandEncoding[255] = read_reserved
-
-
-realNibbles = [
- "0",
- "1",
- "2",
- "3",
- "4",
- "5",
- "6",
- "7",
- "8",
- "9",
- ".",
- "E",
- "E-",
- None,
- "-",
-]
-realNibblesDict = {v: i for i, v in enumerate(realNibbles)}
-
-maxOpStack = 193
-
-
-def buildOperatorDict(operatorList):
- oper = {}
- opc = {}
- for item in operatorList:
- if len(item) == 2:
- oper[item[0]] = item[1]
- else:
- oper[item[0]] = item[1:]
- if isinstance(item[0], tuple):
- opc[item[1]] = item[0]
- else:
- opc[item[1]] = (item[0],)
- return oper, opc
-
-
-t2Operators = [
- # opcode name
- (1, "hstem"),
- (3, "vstem"),
- (4, "vmoveto"),
- (5, "rlineto"),
- (6, "hlineto"),
- (7, "vlineto"),
- (8, "rrcurveto"),
- (10, "callsubr"),
- (11, "return"),
- (14, "endchar"),
- (15, "vsindex"),
- (16, "blend"),
- (18, "hstemhm"),
- (19, "hintmask"),
- (20, "cntrmask"),
- (21, "rmoveto"),
- (22, "hmoveto"),
- (23, "vstemhm"),
- (24, "rcurveline"),
- (25, "rlinecurve"),
- (26, "vvcurveto"),
- (27, "hhcurveto"),
- # (28, 'shortint'), # not really an operator
- (29, "callgsubr"),
- (30, "vhcurveto"),
- (31, "hvcurveto"),
-    ((12, 0), "ignore"),  # dotsection. Yes, there are a few very early OTF/CFF
- # fonts with this deprecated operator. Just ignore it.
- ((12, 3), "and"),
- ((12, 4), "or"),
- ((12, 5), "not"),
- ((12, 8), "store"),
- ((12, 9), "abs"),
- ((12, 10), "add"),
- ((12, 11), "sub"),
- ((12, 12), "div"),
- ((12, 13), "load"),
- ((12, 14), "neg"),
- ((12, 15), "eq"),
- ((12, 18), "drop"),
- ((12, 20), "put"),
- ((12, 21), "get"),
- ((12, 22), "ifelse"),
- ((12, 23), "random"),
- ((12, 24), "mul"),
- ((12, 26), "sqrt"),
- ((12, 27), "dup"),
- ((12, 28), "exch"),
- ((12, 29), "index"),
- ((12, 30), "roll"),
- ((12, 34), "hflex"),
- ((12, 35), "flex"),
- ((12, 36), "hflex1"),
- ((12, 37), "flex1"),
-]
-
-
-def getIntEncoder(format):
- if format == "cff":
- twoByteOp = bytechr(28)
- fourByteOp = bytechr(29)
- elif format == "t1":
- twoByteOp = None
- fourByteOp = bytechr(255)
- else:
- assert format == "t2"
- twoByteOp = bytechr(28)
- fourByteOp = None
-
- def encodeInt(
- value,
- fourByteOp=fourByteOp,
- bytechr=bytechr,
- pack=struct.pack,
- unpack=struct.unpack,
- twoByteOp=twoByteOp,
- ):
- if -107 <= value <= 107:
- code = bytechr(value + 139)
- elif 108 <= value <= 1131:
- value = value - 108
- code = bytechr((value >> 8) + 247) + bytechr(value & 0xFF)
- elif -1131 <= value <= -108:
- value = -value - 108
- code = bytechr((value >> 8) + 251) + bytechr(value & 0xFF)
- elif twoByteOp is not None and -32768 <= value <= 32767:
- code = twoByteOp + pack(">h", value)
- elif fourByteOp is None:
- # Backwards compatible hack: due to a previous bug in FontTools,
- # 16.16 fixed numbers were written out as 4-byte ints. When
- # these numbers were small, they were wrongly written back as
- # small ints instead of 4-byte ints, breaking round-tripping.
-        # This workaround is no better, since we can no longer distinguish
-        # between small ints that were meant to be small fixed numbers and
-        # small ints that were just small ints. Hence the warning.
- log.warning(
- "4-byte T2 number got passed to the "
- "IntType handler. This should happen only when reading in "
- "old XML files.\n"
- )
- code = bytechr(255) + pack(">l", value)
- else:
- code = fourByteOp + pack(">l", value)
- return code
-
- return encodeInt
-
-
-encodeIntCFF = getIntEncoder("cff")
-encodeIntT1 = getIntEncoder("t1")
-encodeIntT2 = getIntEncoder("t2")
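As a quick, self-contained sketch of the byte ranges the `getIntEncoder` closure above implements for Type 2 integers (the function name here is illustrative, not part of fontTools; the real encoder additionally falls back to a 16.16 fixed encoding for out-of-range values):

```python
import struct

def encode_t2_int(value):
    # One byte: -107..107, offset by 139.
    if -107 <= value <= 107:
        return bytes([value + 139])
    # Two bytes: 108..1131, first byte in 247..250.
    elif 108 <= value <= 1131:
        value -= 108
        return bytes([(value >> 8) + 247, value & 0xFF])
    # Two bytes: -1131..-108, first byte in 251..254.
    elif -1131 <= value <= -108:
        value = -value - 108
        return bytes([(value >> 8) + 251, value & 0xFF])
    # Three bytes: opcode 28 followed by a big-endian signed short.
    elif -32768 <= value <= 32767:
        return b"\x1c" + struct.pack(">h", value)
    raise ValueError("value out of range for a T2 integer operand")
```

Decoding mirrors these same ranges, as in `read_byte`, `read_smallInt1`, `read_smallInt2`, and `read_shortInt` above.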
-
-
-def encodeFixed(f, pack=struct.pack):
- """For T2 only"""
- value = floatToFixed(f, precisionBits=16)
- if value & 0xFFFF == 0: # check if the fractional part is zero
- return encodeIntT2(value >> 16) # encode only the integer part
- else:
- return b"\xff" + pack(">l", value) # encode the entire fixed point value
-
-
-realZeroBytes = bytechr(30) + bytechr(0xF)
-
-
-def encodeFloat(f):
- # For CFF only, used in cffLib
- if f == 0.0: # 0.0 == +0.0 == -0.0
- return realZeroBytes
- # Note: 14 decimal digits seems to be the limitation for CFF real numbers
- # in macOS. However, we use 8 here to match the implementation of AFDKO.
- s = "%.8G" % f
- if s[:2] == "0.":
- s = s[1:]
- elif s[:3] == "-0.":
- s = "-" + s[2:]
- nibbles = []
- while s:
- c = s[0]
- s = s[1:]
- if c == "E":
- c2 = s[:1]
- if c2 == "-":
- s = s[1:]
- c = "E-"
- elif c2 == "+":
- s = s[1:]
- nibbles.append(realNibblesDict[c])
- nibbles.append(0xF)
- if len(nibbles) % 2:
- nibbles.append(0xF)
- d = bytechr(30)
- for i in range(0, len(nibbles), 2):
- d = d + bytechr(nibbles[i] << 4 | nibbles[i + 1])
- return d
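To make the nibble scheme concrete, here is a small decoder for the CFF real-number encoding produced by `encodeFloat` above (a sketch; the name `decode_cff_real` is illustrative). Each byte packs two nibbles from the table above, and the 0xF nibble terminates the number:

```python
REAL_NIBBLES = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", ".", "E", "E-", None, "-"]

def decode_cff_real(data):
    # `data` is the byte string following the 0x1E (30) operator byte.
    chars = []
    for byte in data:
        for nibble in ((byte >> 4) & 0xF, byte & 0xF):
            if nibble == 0xF:  # end-of-number marker
                return float("".join(chars))
            chars.append(REAL_NIBBLES[nibble])
    raise ValueError("unterminated real number")
```

For example, `-2.25` is encoded as the nibbles `-`, `2`, `.`, `2`, `5` plus the 0xF terminator, i.e. the bytes `0xE2 0xA2 0x5F`.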
-
-
-class CharStringCompileError(Exception):
- pass
-
-
-class SimpleT2Decompiler(object):
- def __init__(self, localSubrs, globalSubrs, private=None, blender=None):
- self.localSubrs = localSubrs
- self.localBias = calcSubrBias(localSubrs)
- self.globalSubrs = globalSubrs
- self.globalBias = calcSubrBias(globalSubrs)
- self.private = private
- self.blender = blender
- self.reset()
-
- def reset(self):
- self.callingStack = []
- self.operandStack = []
- self.hintCount = 0
- self.hintMaskBytes = 0
- self.numRegions = 0
- self.vsIndex = 0
-
- def execute(self, charString):
- self.callingStack.append(charString)
- needsDecompilation = charString.needsDecompilation()
- if needsDecompilation:
- program = []
- pushToProgram = program.append
- else:
- pushToProgram = lambda x: None
- pushToStack = self.operandStack.append
- index = 0
- while True:
- token, isOperator, index = charString.getToken(index)
- if token is None:
- break # we're done!
- pushToProgram(token)
- if isOperator:
- handlerName = "op_" + token
- handler = getattr(self, handlerName, None)
- if handler is not None:
- rv = handler(index)
- if rv:
- hintMaskBytes, index = rv
- pushToProgram(hintMaskBytes)
- else:
- self.popall()
- else:
- pushToStack(token)
- if needsDecompilation:
- charString.setProgram(program)
- del self.callingStack[-1]
-
- def pop(self):
- value = self.operandStack[-1]
- del self.operandStack[-1]
- return value
-
- def popall(self):
- stack = self.operandStack[:]
- self.operandStack[:] = []
- return stack
-
- def push(self, value):
- self.operandStack.append(value)
-
- def op_return(self, index):
- if self.operandStack:
- pass
-
- def op_endchar(self, index):
- pass
-
- def op_ignore(self, index):
- pass
-
- def op_callsubr(self, index):
- subrIndex = self.pop()
- subr = self.localSubrs[subrIndex + self.localBias]
- self.execute(subr)
-
- def op_callgsubr(self, index):
- subrIndex = self.pop()
- subr = self.globalSubrs[subrIndex + self.globalBias]
- self.execute(subr)
-
- def op_hstem(self, index):
- self.countHints()
-
- def op_vstem(self, index):
- self.countHints()
-
- def op_hstemhm(self, index):
- self.countHints()
-
- def op_vstemhm(self, index):
- self.countHints()
-
- def op_hintmask(self, index):
- if not self.hintMaskBytes:
- self.countHints()
- self.hintMaskBytes = (self.hintCount + 7) // 8
- hintMaskBytes, index = self.callingStack[-1].getBytes(index, self.hintMaskBytes)
- return hintMaskBytes, index
-
- op_cntrmask = op_hintmask
-
- def countHints(self):
- args = self.popall()
- self.hintCount = self.hintCount + len(args) // 2
-
- # misc
- def op_and(self, index):
- raise NotImplementedError
-
- def op_or(self, index):
- raise NotImplementedError
-
- def op_not(self, index):
- raise NotImplementedError
-
- def op_store(self, index):
- raise NotImplementedError
-
- def op_abs(self, index):
- raise NotImplementedError
-
- def op_add(self, index):
- raise NotImplementedError
-
- def op_sub(self, index):
- raise NotImplementedError
-
- def op_div(self, index):
- raise NotImplementedError
-
- def op_load(self, index):
- raise NotImplementedError
-
- def op_neg(self, index):
- raise NotImplementedError
-
- def op_eq(self, index):
- raise NotImplementedError
-
- def op_drop(self, index):
- raise NotImplementedError
-
- def op_put(self, index):
- raise NotImplementedError
-
- def op_get(self, index):
- raise NotImplementedError
-
- def op_ifelse(self, index):
- raise NotImplementedError
-
- def op_random(self, index):
- raise NotImplementedError
-
- def op_mul(self, index):
- raise NotImplementedError
-
- def op_sqrt(self, index):
- raise NotImplementedError
-
- def op_dup(self, index):
- raise NotImplementedError
-
- def op_exch(self, index):
- raise NotImplementedError
-
- def op_index(self, index):
- raise NotImplementedError
-
- def op_roll(self, index):
- raise NotImplementedError
-
- def op_blend(self, index):
- if self.numRegions == 0:
- self.numRegions = self.private.getNumRegions()
- numBlends = self.pop()
- numOps = numBlends * (self.numRegions + 1)
- if self.blender is None:
- del self.operandStack[
- -(numOps - numBlends) :
- ] # Leave the default operands on the stack.
- else:
- argi = len(self.operandStack) - numOps
- end_args = tuplei = argi + numBlends
- while argi < end_args:
- next_ti = tuplei + self.numRegions
- deltas = self.operandStack[tuplei:next_ti]
- delta = self.blender(self.vsIndex, deltas)
- self.operandStack[argi] += delta
- tuplei = next_ti
- argi += 1
- self.operandStack[end_args:] = []
-
- def op_vsindex(self, index):
- vi = self.pop()
- self.vsIndex = vi
- self.numRegions = self.private.getNumRegions(vi)
-
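`calcSubrBias`, used in `SimpleT2Decompiler.__init__` above, is defined elsewhere in this module; per the CFF spec, subroutine numbers on the stack are biased by an amount that depends on the subroutine count. A sketch of that rule (illustrative name):

```python
def calc_subr_bias(subrs):
    # CFF spec: the bias added to the operand of callsubr/callgsubr
    # depends on how many subroutines the index contains.
    count = len(subrs)
    if count < 1240:
        return 107
    elif count < 33900:
        return 1131
    return 32768
```

This is why `op_callsubr` and `op_callgsubr` index with `subrIndex + self.localBias` and `subrIndex + self.globalBias` respectively.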
-
-t1Operators = [
- # opcode name
- (1, "hstem"),
- (3, "vstem"),
- (4, "vmoveto"),
- (5, "rlineto"),
- (6, "hlineto"),
- (7, "vlineto"),
- (8, "rrcurveto"),
- (9, "closepath"),
- (10, "callsubr"),
- (11, "return"),
- (13, "hsbw"),
- (14, "endchar"),
- (21, "rmoveto"),
- (22, "hmoveto"),
- (30, "vhcurveto"),
- (31, "hvcurveto"),
- ((12, 0), "dotsection"),
- ((12, 1), "vstem3"),
- ((12, 2), "hstem3"),
- ((12, 6), "seac"),
- ((12, 7), "sbw"),
- ((12, 12), "div"),
- ((12, 16), "callothersubr"),
- ((12, 17), "pop"),
- ((12, 33), "setcurrentpoint"),
-]
-
-
-class T2WidthExtractor(SimpleT2Decompiler):
- def __init__(
- self,
- localSubrs,
- globalSubrs,
- nominalWidthX,
- defaultWidthX,
- private=None,
- blender=None,
- ):
- SimpleT2Decompiler.__init__(self, localSubrs, globalSubrs, private, blender)
- self.nominalWidthX = nominalWidthX
- self.defaultWidthX = defaultWidthX
-
- def reset(self):
- SimpleT2Decompiler.reset(self)
- self.gotWidth = 0
- self.width = 0
-
- def popallWidth(self, evenOdd=0):
- args = self.popall()
- if not self.gotWidth:
- if evenOdd ^ (len(args) % 2):
- # For CFF2 charstrings, this should never happen
- assert (
- self.defaultWidthX is not None
- ), "CFF2 CharStrings must not have an initial width value"
- self.width = self.nominalWidthX + args[0]
- args = args[1:]
- else:
- self.width = self.defaultWidthX
- self.gotWidth = 1
- return args
-
- def countHints(self):
- args = self.popallWidth()
- self.hintCount = self.hintCount + len(args) // 2
-
- def op_rmoveto(self, index):
- self.popallWidth()
-
- def op_hmoveto(self, index):
- self.popallWidth(1)
-
- def op_vmoveto(self, index):
- self.popallWidth(1)
-
- def op_endchar(self, index):
- self.popallWidth()
-
-
-class T2OutlineExtractor(T2WidthExtractor):
- def __init__(
- self,
- pen,
- localSubrs,
- globalSubrs,
- nominalWidthX,
- defaultWidthX,
- private=None,
- blender=None,
- ):
- T2WidthExtractor.__init__(
- self,
- localSubrs,
- globalSubrs,
- nominalWidthX,
- defaultWidthX,
- private,
- blender,
- )
- self.pen = pen
- self.subrLevel = 0
-
- def reset(self):
- T2WidthExtractor.reset(self)
- self.currentPoint = (0, 0)
- self.sawMoveTo = 0
- self.subrLevel = 0
-
- def execute(self, charString):
- self.subrLevel += 1
- super().execute(charString)
- self.subrLevel -= 1
- if self.subrLevel == 0:
- self.endPath()
-
- def _nextPoint(self, point):
- x, y = self.currentPoint
- point = x + point[0], y + point[1]
- self.currentPoint = point
- return point
-
- def rMoveTo(self, point):
- self.pen.moveTo(self._nextPoint(point))
- self.sawMoveTo = 1
-
- def rLineTo(self, point):
- if not self.sawMoveTo:
- self.rMoveTo((0, 0))
- self.pen.lineTo(self._nextPoint(point))
-
- def rCurveTo(self, pt1, pt2, pt3):
- if not self.sawMoveTo:
- self.rMoveTo((0, 0))
- nextPoint = self._nextPoint
- self.pen.curveTo(nextPoint(pt1), nextPoint(pt2), nextPoint(pt3))
-
- def closePath(self):
- if self.sawMoveTo:
- self.pen.closePath()
- self.sawMoveTo = 0
-
- def endPath(self):
- # In T2 there are no open paths, so always do a closePath when
- # finishing a sub path. We avoid spurious calls to closePath()
-        # because it's a real T1 op we're emulating in T2, whereas
-        # endPath() is just a means to that emulation.
- if self.sawMoveTo:
- self.closePath()
-
- #
- # hint operators
- #
- # def op_hstem(self, index):
- # self.countHints()
- # def op_vstem(self, index):
- # self.countHints()
- # def op_hstemhm(self, index):
- # self.countHints()
- # def op_vstemhm(self, index):
- # self.countHints()
- # def op_hintmask(self, index):
- # self.countHints()
- # def op_cntrmask(self, index):
- # self.countHints()
-
- #
- # path constructors, moveto
- #
- def op_rmoveto(self, index):
- self.endPath()
- self.rMoveTo(self.popallWidth())
-
- def op_hmoveto(self, index):
- self.endPath()
- self.rMoveTo((self.popallWidth(1)[0], 0))
-
- def op_vmoveto(self, index):
- self.endPath()
- self.rMoveTo((0, self.popallWidth(1)[0]))
-
- def op_endchar(self, index):
- self.endPath()
- args = self.popallWidth()
- if args:
- from fontTools.encodings.StandardEncoding import StandardEncoding
-
-            # endchar can do seac accent building; the T2 spec says it's deprecated,
- # but recent software that shall remain nameless does output it.
- adx, ady, bchar, achar = args
- baseGlyph = StandardEncoding[bchar]
- self.pen.addComponent(baseGlyph, (1, 0, 0, 1, 0, 0))
- accentGlyph = StandardEncoding[achar]
- self.pen.addComponent(accentGlyph, (1, 0, 0, 1, adx, ady))
-
- #
- # path constructors, lines
- #
- def op_rlineto(self, index):
- args = self.popall()
- for i in range(0, len(args), 2):
- point = args[i : i + 2]
- self.rLineTo(point)
-
- def op_hlineto(self, index):
- self.alternatingLineto(1)
-
- def op_vlineto(self, index):
- self.alternatingLineto(0)
-
- #
- # path constructors, curves
- #
- def op_rrcurveto(self, index):
- """{dxa dya dxb dyb dxc dyc}+ rrcurveto"""
- args = self.popall()
- for i in range(0, len(args), 6):
- (
- dxa,
- dya,
- dxb,
- dyb,
- dxc,
- dyc,
- ) = args[i : i + 6]
- self.rCurveTo((dxa, dya), (dxb, dyb), (dxc, dyc))
-
- def op_rcurveline(self, index):
- """{dxa dya dxb dyb dxc dyc}+ dxd dyd rcurveline"""
- args = self.popall()
- for i in range(0, len(args) - 2, 6):
- dxb, dyb, dxc, dyc, dxd, dyd = args[i : i + 6]
- self.rCurveTo((dxb, dyb), (dxc, dyc), (dxd, dyd))
- self.rLineTo(args[-2:])
-
- def op_rlinecurve(self, index):
- """{dxa dya}+ dxb dyb dxc dyc dxd dyd rlinecurve"""
- args = self.popall()
- lineArgs = args[:-6]
- for i in range(0, len(lineArgs), 2):
- self.rLineTo(lineArgs[i : i + 2])
- dxb, dyb, dxc, dyc, dxd, dyd = args[-6:]
- self.rCurveTo((dxb, dyb), (dxc, dyc), (dxd, dyd))
-
- def op_vvcurveto(self, index):
- "dx1? {dya dxb dyb dyc}+ vvcurveto"
- args = self.popall()
- if len(args) % 2:
- dx1 = args[0]
- args = args[1:]
- else:
- dx1 = 0
- for i in range(0, len(args), 4):
- dya, dxb, dyb, dyc = args[i : i + 4]
- self.rCurveTo((dx1, dya), (dxb, dyb), (0, dyc))
- dx1 = 0
-
- def op_hhcurveto(self, index):
- """dy1? {dxa dxb dyb dxc}+ hhcurveto"""
- args = self.popall()
- if len(args) % 2:
- dy1 = args[0]
- args = args[1:]
- else:
- dy1 = 0
- for i in range(0, len(args), 4):
- dxa, dxb, dyb, dxc = args[i : i + 4]
- self.rCurveTo((dxa, dy1), (dxb, dyb), (dxc, 0))
- dy1 = 0
-
- def op_vhcurveto(self, index):
- """dy1 dx2 dy2 dx3 {dxa dxb dyb dyc dyd dxe dye dxf}* dyf? vhcurveto (30)
- {dya dxb dyb dxc dxd dxe dye dyf}+ dxf? vhcurveto
- """
- args = self.popall()
- while args:
- args = self.vcurveto(args)
- if args:
- args = self.hcurveto(args)
-
- def op_hvcurveto(self, index):
- """dx1 dx2 dy2 dy3 {dya dxb dyb dxc dxd dxe dye dyf}* dxf?
- {dxa dxb dyb dyc dyd dxe dye dxf}+ dyf?
- """
- args = self.popall()
- while args:
- args = self.hcurveto(args)
- if args:
- args = self.vcurveto(args)
-
- #
- # path constructors, flex
- #
- def op_hflex(self, index):
- dx1, dx2, dy2, dx3, dx4, dx5, dx6 = self.popall()
- dy1 = dy3 = dy4 = dy6 = 0
- dy5 = -dy2
- self.rCurveTo((dx1, dy1), (dx2, dy2), (dx3, dy3))
- self.rCurveTo((dx4, dy4), (dx5, dy5), (dx6, dy6))
-
- def op_flex(self, index):
- dx1, dy1, dx2, dy2, dx3, dy3, dx4, dy4, dx5, dy5, dx6, dy6, fd = self.popall()
- self.rCurveTo((dx1, dy1), (dx2, dy2), (dx3, dy3))
- self.rCurveTo((dx4, dy4), (dx5, dy5), (dx6, dy6))
-
- def op_hflex1(self, index):
- dx1, dy1, dx2, dy2, dx3, dx4, dx5, dy5, dx6 = self.popall()
- dy3 = dy4 = 0
- dy6 = -(dy1 + dy2 + dy3 + dy4 + dy5)
-
- self.rCurveTo((dx1, dy1), (dx2, dy2), (dx3, dy3))
- self.rCurveTo((dx4, dy4), (dx5, dy5), (dx6, dy6))
-
- def op_flex1(self, index):
- dx1, dy1, dx2, dy2, dx3, dy3, dx4, dy4, dx5, dy5, d6 = self.popall()
- dx = dx1 + dx2 + dx3 + dx4 + dx5
- dy = dy1 + dy2 + dy3 + dy4 + dy5
- if abs(dx) > abs(dy):
- dx6 = d6
- dy6 = -dy
- else:
- dx6 = -dx
- dy6 = d6
- self.rCurveTo((dx1, dy1), (dx2, dy2), (dx3, dy3))
- self.rCurveTo((dx4, dy4), (dx5, dy5), (dx6, dy6))
-
- # misc
- def op_and(self, index):
- raise NotImplementedError
-
- def op_or(self, index):
- raise NotImplementedError
-
- def op_not(self, index):
- raise NotImplementedError
-
- def op_store(self, index):
- raise NotImplementedError
-
- def op_abs(self, index):
- raise NotImplementedError
-
- def op_add(self, index):
- raise NotImplementedError
-
- def op_sub(self, index):
- raise NotImplementedError
-
- def op_div(self, index):
- num2 = self.pop()
- num1 = self.pop()
- d1 = num1 // num2
- d2 = num1 / num2
- if d1 == d2:
- self.push(d1)
- else:
- self.push(d2)
-
- def op_load(self, index):
- raise NotImplementedError
-
- def op_neg(self, index):
- raise NotImplementedError
-
- def op_eq(self, index):
- raise NotImplementedError
-
- def op_drop(self, index):
- raise NotImplementedError
-
- def op_put(self, index):
- raise NotImplementedError
-
- def op_get(self, index):
- raise NotImplementedError
-
- def op_ifelse(self, index):
- raise NotImplementedError
-
- def op_random(self, index):
- raise NotImplementedError
-
- def op_mul(self, index):
- raise NotImplementedError
-
- def op_sqrt(self, index):
- raise NotImplementedError
-
- def op_dup(self, index):
- raise NotImplementedError
-
- def op_exch(self, index):
- raise NotImplementedError
-
- def op_index(self, index):
- raise NotImplementedError
-
- def op_roll(self, index):
- raise NotImplementedError
-
- #
- # miscellaneous helpers
- #
- def alternatingLineto(self, isHorizontal):
- args = self.popall()
- for arg in args:
- if isHorizontal:
- point = (arg, 0)
- else:
- point = (0, arg)
- self.rLineTo(point)
- isHorizontal = not isHorizontal
-
- def vcurveto(self, args):
- dya, dxb, dyb, dxc = args[:4]
- args = args[4:]
- if len(args) == 1:
- dyc = args[0]
- args = []
- else:
- dyc = 0
- self.rCurveTo((0, dya), (dxb, dyb), (dxc, dyc))
- return args
-
- def hcurveto(self, args):
- dxa, dxb, dyb, dyc = args[:4]
- args = args[4:]
- if len(args) == 1:
- dxc = args[0]
- args = []
- else:
- dxc = 0
- self.rCurveTo((dxa, 0), (dxb, dyb), (dxc, dyc))
- return args
-
-
-class T1OutlineExtractor(T2OutlineExtractor):
- def __init__(self, pen, subrs):
- self.pen = pen
- self.subrs = subrs
- self.reset()
-
- def reset(self):
- self.flexing = 0
- self.width = 0
- self.sbx = 0
- T2OutlineExtractor.reset(self)
-
- def endPath(self):
- if self.sawMoveTo:
- self.pen.endPath()
- self.sawMoveTo = 0
-
- def popallWidth(self, evenOdd=0):
- return self.popall()
-
- def exch(self):
- stack = self.operandStack
- stack[-1], stack[-2] = stack[-2], stack[-1]
-
- #
- # path constructors
- #
- def op_rmoveto(self, index):
- if self.flexing:
- return
- self.endPath()
- self.rMoveTo(self.popall())
-
- def op_hmoveto(self, index):
- if self.flexing:
- # We must add a parameter to the stack if we are flexing
- self.push(0)
- return
- self.endPath()
- self.rMoveTo((self.popall()[0], 0))
-
- def op_vmoveto(self, index):
- if self.flexing:
- # We must add a parameter to the stack if we are flexing
- self.push(0)
- self.exch()
- return
- self.endPath()
- self.rMoveTo((0, self.popall()[0]))
-
- def op_closepath(self, index):
- self.closePath()
-
- def op_setcurrentpoint(self, index):
- args = self.popall()
- x, y = args
- self.currentPoint = x, y
-
- def op_endchar(self, index):
- self.endPath()
-
- def op_hsbw(self, index):
- sbx, wx = self.popall()
- self.width = wx
- self.sbx = sbx
- self.currentPoint = sbx, self.currentPoint[1]
-
- def op_sbw(self, index):
- self.popall() # XXX
-
- #
- def op_callsubr(self, index):
- subrIndex = self.pop()
- subr = self.subrs[subrIndex]
- self.execute(subr)
-
- def op_callothersubr(self, index):
- subrIndex = self.pop()
- nArgs = self.pop()
- # print nArgs, subrIndex, "callothersubr"
- if subrIndex == 0 and nArgs == 3:
- self.doFlex()
- self.flexing = 0
- elif subrIndex == 1 and nArgs == 0:
- self.flexing = 1
- # ignore...
-
- def op_pop(self, index):
- pass # ignore...
-
- def doFlex(self):
- finaly = self.pop()
- finalx = self.pop()
- self.pop() # flex height is unused
-
- p3y = self.pop()
- p3x = self.pop()
- bcp4y = self.pop()
- bcp4x = self.pop()
- bcp3y = self.pop()
- bcp3x = self.pop()
- p2y = self.pop()
- p2x = self.pop()
- bcp2y = self.pop()
- bcp2x = self.pop()
- bcp1y = self.pop()
- bcp1x = self.pop()
- rpy = self.pop()
- rpx = self.pop()
-
- # call rrcurveto
- self.push(bcp1x + rpx)
- self.push(bcp1y + rpy)
- self.push(bcp2x)
- self.push(bcp2y)
- self.push(p2x)
- self.push(p2y)
- self.op_rrcurveto(None)
-
- # call rrcurveto
- self.push(bcp3x)
- self.push(bcp3y)
- self.push(bcp4x)
- self.push(bcp4y)
- self.push(p3x)
- self.push(p3y)
- self.op_rrcurveto(None)
-
- # Push back final coords so subr 0 can find them
- self.push(finalx)
- self.push(finaly)
-
- def op_dotsection(self, index):
- self.popall() # XXX
-
- def op_hstem3(self, index):
- self.popall() # XXX
-
- def op_seac(self, index):
- "asb adx ady bchar achar seac"
- from fontTools.encodings.StandardEncoding import StandardEncoding
-
- asb, adx, ady, bchar, achar = self.popall()
- baseGlyph = StandardEncoding[bchar]
- self.pen.addComponent(baseGlyph, (1, 0, 0, 1, 0, 0))
- accentGlyph = StandardEncoding[achar]
- adx = adx + self.sbx - asb # seac weirdness
- self.pen.addComponent(accentGlyph, (1, 0, 0, 1, adx, ady))
-
- def op_vstem3(self, index):
- self.popall() # XXX
-
-
-class T2CharString(object):
-
- operandEncoding = t2OperandEncoding
- operators, opcodes = buildOperatorDict(t2Operators)
- decompilerClass = SimpleT2Decompiler
- outlineExtractor = T2OutlineExtractor
-
- def __init__(self, bytecode=None, program=None, private=None, globalSubrs=None):
- if program is None:
- program = []
- self.bytecode = bytecode
- self.program = program
- self.private = private
- self.globalSubrs = globalSubrs if globalSubrs is not None else []
- self._cur_vsindex = None
-
- def getNumRegions(self, vsindex=None):
- pd = self.private
- assert pd is not None
- if vsindex is not None:
- self._cur_vsindex = vsindex
- elif self._cur_vsindex is None:
- self._cur_vsindex = pd.vsindex if hasattr(pd, "vsindex") else 0
- return pd.getNumRegions(self._cur_vsindex)
-
- def __repr__(self):
- if self.bytecode is None:
- return "<%s (source) at %x>" % (self.__class__.__name__, id(self))
- else:
- return "<%s (bytecode) at %x>" % (self.__class__.__name__, id(self))
-
- def getIntEncoder(self):
- return encodeIntT2
-
- def getFixedEncoder(self):
- return encodeFixed
-
- def decompile(self):
- if not self.needsDecompilation():
- return
- subrs = getattr(self.private, "Subrs", [])
- decompiler = self.decompilerClass(subrs, self.globalSubrs, self.private)
- decompiler.execute(self)
-
- def draw(self, pen, blender=None):
- subrs = getattr(self.private, "Subrs", [])
- extractor = self.outlineExtractor(
- pen,
- subrs,
- self.globalSubrs,
- self.private.nominalWidthX,
- self.private.defaultWidthX,
- self.private,
- blender,
- )
- extractor.execute(self)
- self.width = extractor.width
-
- def calcBounds(self, glyphSet):
- boundsPen = BoundsPen(glyphSet)
- self.draw(boundsPen)
- return boundsPen.bounds
-
- def compile(self, isCFF2=False):
- if self.bytecode is not None:
- return
- opcodes = self.opcodes
- program = self.program
-
- if isCFF2:
- # If present, remove return and endchar operators.
- if program and program[-1] in ("return", "endchar"):
- program = program[:-1]
- elif program and not isinstance(program[-1], str):
- raise CharStringCompileError(
- "T2CharString or Subr has items on the stack after last operator."
- )
-
- bytecode = []
- encodeInt = self.getIntEncoder()
- encodeFixed = self.getFixedEncoder()
- i = 0
- end = len(program)
- while i < end:
- token = program[i]
- i = i + 1
- if isinstance(token, str):
- try:
- bytecode.extend(bytechr(b) for b in opcodes[token])
- except KeyError:
- raise CharStringCompileError("illegal operator: %s" % token)
- if token in ("hintmask", "cntrmask"):
- bytecode.append(program[i]) # hint mask
- i = i + 1
- elif isinstance(token, int):
- bytecode.append(encodeInt(token))
- elif isinstance(token, float):
- bytecode.append(encodeFixed(token))
- else:
- assert 0, "unsupported type: %s" % type(token)
- try:
- bytecode = bytesjoin(bytecode)
- except TypeError:
- log.error(bytecode)
- raise
- self.setBytecode(bytecode)
-
- def needsDecompilation(self):
- return self.bytecode is not None
-
- def setProgram(self, program):
- self.program = program
- self.bytecode = None
-
- def setBytecode(self, bytecode):
- self.bytecode = bytecode
- self.program = None
-
- def getToken(self, index, len=len, byteord=byteord, isinstance=isinstance):
- if self.bytecode is not None:
- if index >= len(self.bytecode):
- return None, 0, 0
- b0 = byteord(self.bytecode[index])
- index = index + 1
- handler = self.operandEncoding[b0]
- token, index = handler(self, b0, self.bytecode, index)
- else:
- if index >= len(self.program):
- return None, 0, 0
- token = self.program[index]
- index = index + 1
- isOperator = isinstance(token, str)
- return token, isOperator, index
-
- def getBytes(self, index, nBytes):
- if self.bytecode is not None:
- newIndex = index + nBytes
- bytes = self.bytecode[index:newIndex]
- index = newIndex
- else:
- bytes = self.program[index]
- index = index + 1
- assert len(bytes) == nBytes
- return bytes, index
-
- def handle_operator(self, operator):
- return operator
-
- def toXML(self, xmlWriter, ttFont=None):
- from fontTools.misc.textTools import num2binary
-
- if self.bytecode is not None:
- xmlWriter.dumphex(self.bytecode)
- else:
- index = 0
- args = []
- while True:
- token, isOperator, index = self.getToken(index)
- if token is None:
- break
- if isOperator:
- if token in ("hintmask", "cntrmask"):
- hintMask, isOperator, index = self.getToken(index)
- bits = []
- for byte in hintMask:
- bits.append(num2binary(byteord(byte), 8))
- hintMask = strjoin(bits)
- line = " ".join(args + [token, hintMask])
- else:
- line = " ".join(args + [token])
- xmlWriter.write(line)
- xmlWriter.newline()
- args = []
- else:
- if isinstance(token, float):
- token = floatToFixedToStr(token, precisionBits=16)
- else:
- token = str(token)
- args.append(token)
- if args:
- # NOTE: only CFF2 charstrings/subrs can have numeric arguments on
- # the stack after the last operator. Compiling this would fail if
-            # this is part of a CFF 1.0 table.
- line = " ".join(args)
- xmlWriter.write(line)
-
- def fromXML(self, name, attrs, content):
- from fontTools.misc.textTools import binary2num, readHex
-
- if attrs.get("raw"):
- self.setBytecode(readHex(content))
- return
- content = strjoin(content)
- content = content.split()
- program = []
- end = len(content)
- i = 0
- while i < end:
- token = content[i]
- i = i + 1
- try:
- token = int(token)
- except ValueError:
- try:
- token = strToFixedToFloat(token, precisionBits=16)
- except ValueError:
- program.append(token)
- if token in ("hintmask", "cntrmask"):
- mask = content[i]
- maskBytes = b""
- for j in range(0, len(mask), 8):
- maskBytes = maskBytes + bytechr(binary2num(mask[j : j + 8]))
- program.append(maskBytes)
- i = i + 1
- else:
- program.append(token)
- else:
- program.append(token)
- self.setProgram(program)
-
-
-class T1CharString(T2CharString):
-
- operandEncoding = t1OperandEncoding
- operators, opcodes = buildOperatorDict(t1Operators)
-
- def __init__(self, bytecode=None, program=None, subrs=None):
- super().__init__(bytecode, program)
- self.subrs = subrs
-
- def getIntEncoder(self):
- return encodeIntT1
-
-    def getFixedEncoder(self):
-        def encodeFixed(value):
-            raise TypeError("Type 1 charstrings don't support floating point operands")
-
-        return encodeFixed
-
- def decompile(self):
- if self.bytecode is None:
- return
- program = []
- index = 0
- while True:
- token, isOperator, index = self.getToken(index)
- if token is None:
- break
- program.append(token)
- self.setProgram(program)
-
- def draw(self, pen):
- extractor = T1OutlineExtractor(pen, self.subrs)
- extractor.execute(self)
- self.width = extractor.width
-
-
-class DictDecompiler(object):
-
- operandEncoding = cffDictOperandEncoding
-
- def __init__(self, strings, parent=None):
- self.stack = []
- self.strings = strings
- self.dict = {}
- self.parent = parent
-
- def getDict(self):
- assert len(self.stack) == 0, "non-empty stack"
- return self.dict
-
- def decompile(self, data):
- index = 0
- lenData = len(data)
- push = self.stack.append
- while index < lenData:
- b0 = byteord(data[index])
- index = index + 1
- handler = self.operandEncoding[b0]
- value, index = handler(self, b0, data, index)
- if value is not None:
- push(value)
-
- def pop(self):
- value = self.stack[-1]
- del self.stack[-1]
- return value
-
- def popall(self):
- args = self.stack[:]
- del self.stack[:]
- return args
-
- def handle_operator(self, operator):
- operator, argType = operator
- if isinstance(argType, tuple):
- value = ()
- for i in range(len(argType) - 1, -1, -1):
- arg = argType[i]
- arghandler = getattr(self, "arg_" + arg)
- value = (arghandler(operator),) + value
- else:
- arghandler = getattr(self, "arg_" + argType)
- value = arghandler(operator)
- if operator == "blend":
- self.stack.extend(value)
- else:
- self.dict[operator] = value
-
- def arg_number(self, name):
- if isinstance(self.stack[0], list):
- out = self.arg_blend_number(self.stack)
- else:
- out = self.pop()
- return out
-
- def arg_blend_number(self, name):
- # Pop the blend arg list; any non-blend args remaining below it on
- # the stack are discarded.
- blendArgs = self.pop()
- self.popall()
- return blendArgs
-
- def arg_SID(self, name):
- return self.strings[self.pop()]
-
- def arg_array(self, name):
- return self.popall()
-
- def arg_blendList(self, name):
- """
- There may be non-blend args at the top of the stack. We first calculate
- where the blend args start in the stack. These are the last
- numMasters*numBlends) +1 args.
- The blend args starts with numMasters relative coordinate values, the BlueValues in the list from the default master font. This is followed by
- numBlends list of values. Each of value in one of these lists is the
- Variable Font delta for the matching region.
-
- We re-arrange this to be a list of numMaster entries. Each entry starts with the corresponding default font relative value, and is followed by
- the delta values. We then convert the default values, the first item in each entry, to an absolute value.
- """
- vsindex = self.dict.get("vsindex", 0)
- numMasters = (
- self.parent.getNumRegions(vsindex) + 1
- ) # only a PrivateDict has blended ops.
- numBlends = self.pop()
- args = self.popall()
- numArgs = len(args)
- # The spec says that there should be no non-blended Blue Values.
- assert numArgs == numMasters * numBlends
- value = [None] * numBlends
- numDeltas = numMasters - 1
- i = 0
- prevVal = 0
- while i < numBlends:
- newVal = args[i] + prevVal
- prevVal = newVal
- masterOffset = numBlends + (i * numDeltas)
- blendList = [newVal] + args[masterOffset : masterOffset + numDeltas]
- value[i] = blendList
- i += 1
- return value
-
- def arg_delta(self, name):
- valueList = self.popall()
- out = []
- if valueList and isinstance(valueList[0], list):
- # arg_blendList() has already converted these to absolute values.
- out = valueList
- else:
- current = 0
- for v in valueList:
- current = current + v
- out.append(current)
- return out
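The delta decoding in `arg_delta` above is just a running sum over difference-encoded values. A standalone sketch of that step (hypothetical helper name, stdlib only):

```python
def deltas_to_absolute(values):
    """Recover absolute values from a CFF delta-encoded array.

    CFF DICT arrays such as BlueValues store each element as the
    difference from the previous one; a running sum restores them.
    """
    out, current = [], 0
    for v in values:
        current += v
        out.append(current)
    return out

# deltas_to_absolute([10, 5, 5]) -> [10, 15, 20]
```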
-
-
-def calcSubrBias(subrs):
- nSubrs = len(subrs)
- if nSubrs < 1240:
- bias = 107
- elif nSubrs < 33900:
- bias = 1131
- else:
- bias = 32768
- return bias
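The thresholds in `calcSubrBias` come from the Type 2 charstring format: `callsubr` operands are stored biased so that the most common subroutine indices fit in the compact single-byte number encodings. A stdlib-only restatement, with the operand-to-index mapping made explicit (`subr_index` is a hypothetical helper, not part of fontTools):

```python
def calc_subr_bias(n_subrs):
    # Bias thresholds per the Type 2 charstring format.
    if n_subrs < 1240:
        return 107
    elif n_subrs < 33900:
        return 1131
    return 32768

def subr_index(operand, n_subrs):
    # A callsubr operand is stored biased; the actual index into the
    # subrs array is operand + bias.
    return operand + calc_subr_bias(n_subrs)
```

For example, with 50 subroutines the bias is 107, so a stored operand of -107 refers to subr 0.
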
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/cu2quPen.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/cu2quPen.py
deleted file mode 100644
index f182aed44a0e8a6dfd906c385f10a5f3a14c332e..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/cu2quPen.py
+++ /dev/null
@@ -1,325 +0,0 @@
-# Copyright 2016 Google Inc. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import operator
-from fontTools.cu2qu import curve_to_quadratic, curves_to_quadratic
-from fontTools.pens.basePen import decomposeSuperBezierSegment
-from fontTools.pens.filterPen import FilterPen
-from fontTools.pens.reverseContourPen import ReverseContourPen
-from fontTools.pens.pointPen import BasePointToSegmentPen
-from fontTools.pens.pointPen import ReverseContourPointPen
-
-
-class Cu2QuPen(FilterPen):
- """A filter pen to convert cubic bezier curves to quadratic b-splines
- using the FontTools SegmentPen protocol.
-
- Args:
-
- other_pen: another SegmentPen used to draw the transformed outline.
- max_err: maximum approximation error in font units. For optimal results,
- if you know the UPEM of the font, we recommend setting this to a
- value equal to, or close to, UPEM / 1000.
- reverse_direction: flip the contours' direction but keep starting point.
- stats: a dictionary counting the point numbers of quadratic segments.
- all_quadratic: if True (default), only quadratic b-splines are generated.
- if False, quadratic curves or cubic curves are generated depending
- on which one is more economical.
- """
-
- def __init__(
- self,
- other_pen,
- max_err,
- reverse_direction=False,
- stats=None,
- all_quadratic=True,
- ):
- if reverse_direction:
- other_pen = ReverseContourPen(other_pen)
- super().__init__(other_pen)
- self.max_err = max_err
- self.stats = stats
- self.all_quadratic = all_quadratic
-
- def _convert_curve(self, pt1, pt2, pt3):
- curve = (self.current_pt, pt1, pt2, pt3)
- result = curve_to_quadratic(curve, self.max_err, self.all_quadratic)
- if self.stats is not None:
- n = str(len(result) - 2)
- self.stats[n] = self.stats.get(n, 0) + 1
- if self.all_quadratic:
- self.qCurveTo(*result[1:])
- else:
- if len(result) == 3:
- self.qCurveTo(*result[1:])
- else:
- assert len(result) == 4
- super().curveTo(*result[1:])
-
- def curveTo(self, *points):
- n = len(points)
- if n == 3:
- # this is the most common case, so we special-case it
- self._convert_curve(*points)
- elif n > 3:
- for segment in decomposeSuperBezierSegment(points):
- self._convert_curve(*segment)
- else:
- self.qCurveTo(*points)
-
-
-class Cu2QuPointPen(BasePointToSegmentPen):
- """A filter pen to convert cubic bezier curves to quadratic b-splines
- using the FontTools PointPen protocol.
-
- Args:
- other_point_pen: another PointPen used to draw the transformed outline.
- max_err: maximum approximation error in font units. For optimal results,
- if you know the UPEM of the font, we recommend setting this to a
- value equal to, or close to, UPEM / 1000.
- reverse_direction: reverse the winding direction of all contours.
- stats: a dictionary counting the point numbers of quadratic segments.
- all_quadratic: if True (default), only quadratic b-splines are generated.
- if False, quadratic curves or cubic curves are generated depending
- on which one is more economical.
- """
-
- __points_required = {
- "move": (1, operator.eq),
- "line": (1, operator.eq),
- "qcurve": (2, operator.ge),
- "curve": (3, operator.eq),
- }
-
- def __init__(
- self,
- other_point_pen,
- max_err,
- reverse_direction=False,
- stats=None,
- all_quadratic=True,
- ):
- BasePointToSegmentPen.__init__(self)
- if reverse_direction:
- self.pen = ReverseContourPointPen(other_point_pen)
- else:
- self.pen = other_point_pen
- self.max_err = max_err
- self.stats = stats
- self.all_quadratic = all_quadratic
-
- def _flushContour(self, segments):
- assert len(segments) >= 1
- closed = segments[0][0] != "move"
- new_segments = []
- prev_points = segments[-1][1]
- prev_on_curve = prev_points[-1][0]
- for segment_type, points in segments:
- if segment_type == "curve":
- for sub_points in self._split_super_bezier_segments(points):
- on_curve, smooth, name, kwargs = sub_points[-1]
- bcp1, bcp2 = sub_points[0][0], sub_points[1][0]
- cubic = [prev_on_curve, bcp1, bcp2, on_curve]
- quad = curve_to_quadratic(cubic, self.max_err, self.all_quadratic)
- if self.stats is not None:
- n = str(len(quad) - 2)
- self.stats[n] = self.stats.get(n, 0) + 1
- new_points = [(pt, False, None, {}) for pt in quad[1:-1]]
- new_points.append((on_curve, smooth, name, kwargs))
- if self.all_quadratic or len(new_points) == 2:
- new_segments.append(["qcurve", new_points])
- else:
- new_segments.append(["curve", new_points])
- prev_on_curve = sub_points[-1][0]
- else:
- new_segments.append([segment_type, points])
- prev_on_curve = points[-1][0]
- if closed:
- # the BasePointToSegmentPen.endPath method that calls _flushContour
- # rotates the point list of closed contours so that they end with
- # the first on-curve point. We restore the original starting point.
- new_segments = new_segments[-1:] + new_segments[:-1]
- self._drawPoints(new_segments)
-
- def _split_super_bezier_segments(self, points):
- sub_segments = []
- # n is the number of control points
- n = len(points) - 1
- if n == 2:
- # a simple bezier curve segment
- sub_segments.append(points)
- elif n > 2:
- # a "super" bezier; decompose it
- on_curve, smooth, name, kwargs = points[-1]
- num_sub_segments = n - 1
- for i, sub_points in enumerate(
- decomposeSuperBezierSegment([pt for pt, _, _, _ in points])
- ):
- new_segment = []
- for point in sub_points[:-1]:
- new_segment.append((point, False, None, {}))
- if i == (num_sub_segments - 1):
- # the last on-curve keeps its original attributes
- new_segment.append((on_curve, smooth, name, kwargs))
- else:
- # on-curves of sub-segments are always "smooth"
- new_segment.append((sub_points[-1], True, None, {}))
- sub_segments.append(new_segment)
- else:
- raise AssertionError("expected at least 2 control points, found: %d" % n)
- return sub_segments
-
- def _drawPoints(self, segments):
- pen = self.pen
- pen.beginPath()
- last_offcurves = []
- points_required = self.__points_required
- for i, (segment_type, points) in enumerate(segments):
- if segment_type in points_required:
- n, op = points_required[segment_type]
- assert op(len(points), n), (
- f"illegal {segment_type!r} segment point count: "
- f"expected {n}, got {len(points)}"
- )
- offcurves = points[:-1]
- if i == 0:
- # any off-curve points preceding the first on-curve
- # will be appended at the end of the contour
- last_offcurves = offcurves
- else:
- for (pt, smooth, name, kwargs) in offcurves:
- pen.addPoint(pt, None, smooth, name, **kwargs)
- pt, smooth, name, kwargs = points[-1]
- if pt is None:
- assert segment_type == "qcurve"
- # special quadratic contour with no on-curve points:
- # we need to skip the "None" point. See also the Pen
- # protocol's qCurveTo() method and fontTools.pens.basePen
- pass
- else:
- pen.addPoint(pt, segment_type, smooth, name, **kwargs)
- else:
- raise AssertionError("unexpected segment type: %r" % segment_type)
- for (pt, smooth, name, kwargs) in last_offcurves:
- pen.addPoint(pt, None, smooth, name, **kwargs)
- pen.endPath()
-
- def addComponent(self, baseGlyphName, transformation):
- assert self.currentPath is None
- self.pen.addComponent(baseGlyphName, transformation)
-
-
-class Cu2QuMultiPen:
- """A filter multi-pen to convert cubic bezier curves to quadratic b-splines
- in an interpolation-compatible manner, using the FontTools SegmentPen protocol.
-
- Args:
-
- other_pens: list of SegmentPens used to draw the transformed outlines.
- max_err: maximum approximation error in font units. For optimal results,
- if you know the UPEM of the font, we recommend setting this to a
- value equal to, or close to, UPEM / 1000.
- reverse_direction: flip the contours' direction but keep starting point.
-
- This pen does not follow the normal SegmentPen protocol. Instead, its
- moveTo/lineTo/qCurveTo/curveTo methods take a list of tuples that are
- arguments that would normally be passed to a SegmentPen, one item for
- each of the pens in other_pens.
- """
-
- # TODO Simplify like 3e8ebcdce592fe8a59ca4c3a294cc9724351e1ce
- # Remove start_pts and _add_moveTo
-
- def __init__(self, other_pens, max_err, reverse_direction=False):
- if reverse_direction:
- other_pens = [
- ReverseContourPen(pen, outputImpliedClosingLine=True)
- for pen in other_pens
- ]
- self.pens = other_pens
- self.max_err = max_err
- self.start_pts = None
- self.current_pts = None
-
- def _check_contour_is_open(self):
- if self.current_pts is None:
- raise AssertionError("moveTo is required")
-
- def _check_contour_is_closed(self):
- if self.current_pts is not None:
- raise AssertionError("closePath or endPath is required")
-
- def _add_moveTo(self):
- if self.start_pts is not None:
- for pt, pen in zip(self.start_pts, self.pens):
- pen.moveTo(*pt)
- self.start_pts = None
-
- def moveTo(self, pts):
- self._check_contour_is_closed()
- self.start_pts = self.current_pts = pts
- self._add_moveTo()
-
- def lineTo(self, pts):
- self._check_contour_is_open()
- self._add_moveTo()
- for pt, pen in zip(pts, self.pens):
- pen.lineTo(*pt)
- self.current_pts = pts
-
- def qCurveTo(self, pointsList):
- self._check_contour_is_open()
- if len(pointsList[0]) == 1:
- self.lineTo([(points[0],) for points in pointsList])
- return
- self._add_moveTo()
- current_pts = []
- for points, pen in zip(pointsList, self.pens):
- pen.qCurveTo(*points)
- current_pts.append((points[-1],))
- self.current_pts = current_pts
-
- def _curves_to_quadratic(self, pointsList):
- curves = []
- for current_pt, points in zip(self.current_pts, pointsList):
- curves.append(current_pt + points)
- quadratics = curves_to_quadratic(curves, [self.max_err] * len(curves))
- pointsList = []
- for quadratic in quadratics:
- pointsList.append(quadratic[1:])
- self.qCurveTo(pointsList)
-
- def curveTo(self, pointsList):
- self._check_contour_is_open()
- self._curves_to_quadratic(pointsList)
-
- def closePath(self):
- self._check_contour_is_open()
- if self.start_pts is None:
- for pen in self.pens:
- pen.closePath()
- self.current_pts = self.start_pts = None
-
- def endPath(self):
- self._check_contour_is_open()
- if self.start_pts is None:
- for pen in self.pens:
- pen.endPath()
- self.current_pts = self.start_pts = None
-
- def addComponent(self, glyphName, transformations):
- self._check_contour_is_closed()
- for trans, pen in zip(transformations, self.pens):
- pen.addComponent(glyphName, trans)
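The pens above delegate the actual conversion to cu2qu's `curve_to_quadratic`, which splits the cubic and approximates each piece until the error bound is met. The one-piece case can be sketched in plain Python (hypothetical helper, not the fontTools API):

```python
def cubic_to_one_quad(p0, p1, p2, p3):
    """Approximate a cubic Bezier by a single quadratic.

    The quadratic's off-curve point is (3*(p1 + p2) - (p0 + p3)) / 4.
    This is exact when the cubic is a degree-elevated quadratic, and an
    approximation otherwise; cu2qu keeps subdividing the cubic until each
    piece's approximation error is below max_err.
    """
    cx = (3 * (p1[0] + p2[0]) - (p0[0] + p3[0])) / 4.0
    cy = (3 * (p1[1] + p2[1]) - (p0[1] + p3[1])) / 4.0
    return (p0, (cx, cy), p3)
```

Degree-elevating a quadratic to a cubic and running it through this function recovers the original off-curve point exactly.
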
diff --git a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/constants/publicSepToken.ts b/spaces/DaFujaTyping/hf-Chat-ui/src/lib/constants/publicSepToken.ts
deleted file mode 100644
index 15d962d69ba33e1abeb8a35885aa7647d24cf7af..0000000000000000000000000000000000000000
--- a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/constants/publicSepToken.ts
+++ /dev/null
@@ -1 +0,0 @@
-export const PUBLIC_SEP_TOKEN = "";
diff --git a/spaces/Datasculptor/MusicGen/tests/modules/test_rope.py b/spaces/Datasculptor/MusicGen/tests/modules/test_rope.py
deleted file mode 100644
index 067c6f067acbf27fb0fef5c2b812c22474c4fcd0..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/tests/modules/test_rope.py
+++ /dev/null
@@ -1,168 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from audiocraft.modules.rope import RotaryEmbedding
-from audiocraft.modules.transformer import StreamingTransformer, set_efficient_attention_backend
-
-
-def test_rope():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope = RotaryEmbedding(dim=C)
- xq = torch.rand((B, T, H, C))
- xk = torch.rand((B, T, H, C))
- xq_out, xk_out = rope.rotate_qk(xq, xk, start=7)
-
- assert list(xq_out.shape) == [B, T, H, C]
- assert list(xk_out.shape) == [B, T, H, C]
-
-
-def test_rope_io_dtypes():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope_32 = RotaryEmbedding(dim=C, dtype=torch.float32)
- rope_64 = RotaryEmbedding(dim=C, dtype=torch.float64)
-
- # Test bfloat16 inputs w/ both 32 and 64 precision rope.
- xq_16 = torch.rand((B, T, H, C)).to(torch.bfloat16)
- xk_16 = torch.rand((B, T, H, C)).to(torch.bfloat16)
- xq_out, xk_out = rope_32.rotate_qk(xq_16, xk_16)
- assert xq_out.dtype == torch.bfloat16
- xq_out, xk_out = rope_64.rotate_qk(xq_16, xk_16)
- assert xq_out.dtype == torch.bfloat16
-
- # Test float32 inputs w/ both 32 and 64 precision rope.
- xq_32 = torch.rand((B, T, H, C)).to(torch.float32)
- xk_32 = torch.rand((B, T, H, C)).to(torch.float32)
- xq_out, xk_out = rope_32.rotate_qk(xq_32, xk_32)
- assert xq_out.dtype == torch.float32
- xq_out, xk_out = rope_64.rotate_qk(xq_32, xk_32)
- assert xq_out.dtype == torch.float32
-
-
-def test_transformer_with_rope():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
- for pos in ['rope', 'sin_rope']:
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., layer_scale=0.1,
- positional_embedding=pos)
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- out = tr(x)
- assert list(out.shape) == list(x.shape)
-
-
-@torch.no_grad()
-def test_rope_streaming():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
- tr = StreamingTransformer(
- 16, 4, 2, causal=True, dropout=0.,
- custom=True, positional_embedding='rope')
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- ref = tr(x)
-
- with tr.streaming():
- outs = []
- frame_sizes = [1] * steps
-
- for frame_size in frame_sizes:
- frame = x[:, :frame_size]
- x = x[:, frame_size:]
- outs.append(tr(frame))
-
- out = torch.cat(outs, dim=1)
- assert list(out.shape) == [3, steps, 16]
- delta = torch.norm(out - ref) / torch.norm(out)
- assert delta < 1e-6, delta
-
-
-@torch.no_grad()
-def test_rope_streaming_past_context():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
-
- for context in [None, 10]:
- tr = StreamingTransformer(
- 16, 4, 1 if context else 2,
- causal=True, past_context=context, custom=True,
- dropout=0., positional_embedding='rope')
- tr.eval()
-
- steps = 20
- x = torch.randn(3, steps, 16)
- ref = tr(x)
-
- with tr.streaming():
- outs = []
- frame_sizes = [1] * steps
-
- for frame_size in frame_sizes:
- frame = x[:, :frame_size]
- x = x[:, frame_size:]
- outs.append(tr(frame))
-
- out = torch.cat(outs, dim=1)
- assert list(out.shape) == [3, steps, 16]
- delta = torch.norm(out - ref) / torch.norm(out)
- assert delta < 1e-6, delta
-
-
-def test_rope_memory_efficient():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., layer_scale=0.1,
- positional_embedding='rope')
- tr_mem_efficient = StreamingTransformer(
- 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1,
- positional_embedding='rope')
- tr_mem_efficient.load_state_dict(tr.state_dict())
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- with torch.no_grad():
- y = tr(x)
- y2 = tr_mem_efficient(x)
- # Check at float precision b/c this is the rope default.
- assert torch.allclose(y, y2, atol=1e-7), (y - y2).norm()
-
-
-def test_rope_with_xpos():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope = RotaryEmbedding(dim=C, xpos=True)
- xq = torch.rand((B, T, H, C))
- xk = torch.rand((B, T, H, C))
- xq_out, xk_out = rope.rotate_qk(xq, xk, start=7)
-
- assert list(xq_out.shape) == [B, T, H, C]
- assert list(xk_out.shape) == [B, T, H, C]
-
-
-def test_positional_scale():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope = RotaryEmbedding(dim=C, xpos=True, scale=0.0)
- xq = torch.rand((B, T, H, C))
- xk = torch.rand((B, T, H, C))
- xq_out, xk_out = rope.rotate_qk(xq, xk, start=7)
-
- assert torch.allclose(xq, xq_out)
- assert torch.allclose(xk, xk_out)
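The rotation these tests exercise can be sketched without torch: RoPE treats each (even, odd) channel pair as a complex number and multiplies it by a unit phasor whose angle grows with the token position. A stdlib sketch of the idea (hypothetical helper, not the audiocraft API):

```python
import cmath
import math

def rope_rotate(pairs, position, base=10000.0):
    """Rotary position embedding on a list of (even, odd) channel pairs.

    Pair i is rotated by angle position * base**(-2*i/d), where d is the
    head dimension (2 * number of pairs). Being a pure rotation, it
    preserves each pair's norm and leaves position 0 unchanged.
    """
    d = 2 * len(pairs)
    out = []
    for i, (a, b) in enumerate(pairs):
        theta = position * base ** (-2.0 * i / d)
        z = complex(a, b) * cmath.exp(1j * theta)
        out.append((z.real, z.imag))
    return out
```

The shape-preservation asserts in `test_rope` above correspond to the fact that this map only rotates values pairwise; it never changes the number of channels.
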
diff --git a/spaces/DiamondYin/AnewGame/Build/WaliwebGLgameFPS.framework.js b/spaces/DiamondYin/AnewGame/Build/WaliwebGLgameFPS.framework.js
deleted file mode 100644
index 4ba2a954f48a58c730cad5e1c7a8c89334f16626..0000000000000000000000000000000000000000
--- a/spaces/DiamondYin/AnewGame/Build/WaliwebGLgameFPS.framework.js
+++ /dev/null
@@ -1,22 +0,0 @@
-
-var unityFramework = (() => {
- var _scriptDir = typeof document !== 'undefined' && document.currentScript ? document.currentScript.src : undefined;
- if (typeof __filename !== 'undefined') _scriptDir = _scriptDir || __filename;
- return (
-function(unityFramework) {
- unityFramework = unityFramework || {};
-
-var Module=typeof unityFramework!="undefined"?unityFramework:{};var readyPromiseResolve,readyPromiseReject;Module["ready"]=new Promise(function(resolve,reject){readyPromiseResolve=resolve;readyPromiseReject=reject});
-function Pointer_stringify(s,len){warnOnce("The JavaScript function 'Pointer_stringify(ptrToSomeCString)' is obsoleted and will be removed in a future Unity version. Please call 'UTF8ToString(ptrToSomeCString)' instead.");return UTF8ToString(s,len)}Module["Pointer_stringify"]=Pointer_stringify;var stackTraceReference="(^|\\n)(\\s+at\\s+|)jsStackTrace(\\s+\\(|@)([^\\n]+):\\d+:\\d+(\\)|)(\\n|$)";var stackTraceReferenceMatch=jsStackTrace().match(new RegExp(stackTraceReference));if(stackTraceReferenceMatch)Module.stackTraceRegExp=new RegExp(stackTraceReference.replace("([^\\n]+)",stackTraceReferenceMatch[4].replace(/[\\^${}[\]().*+?|]/g,"\\$&")).replace("jsStackTrace","[^\\n]+"));var abort=function(what){if(ABORT)return;ABORT=true;EXITSTATUS=1;if(typeof ENVIRONMENT_IS_PTHREAD!=="undefined"&&ENVIRONMENT_IS_PTHREAD)console.error("Pthread aborting at "+(new Error).stack);if(what!==undefined){out(what);err(what);what=JSON.stringify(what)}else{what=""}var message="abort("+what+") at "+stackTrace();if(Module.abortHandler&&Module.abortHandler(message))return;throw message};Module["SetFullscreen"]=function(fullscreen){if(typeof runtimeInitialized==="undefined"||!runtimeInitialized){console.log("Runtime not initialized yet.")}else if(typeof JSEvents==="undefined"){console.log("Player not loaded yet.")}else{var tmp=JSEvents.canPerformEventHandlerRequests;JSEvents.canPerformEventHandlerRequests=function(){return 1};Module.ccall("SetFullscreen",null,["number"],[fullscreen]);JSEvents.canPerformEventHandlerRequests=tmp}};if(!Module["ENVIRONMENT_IS_PTHREAD"]){Module["preRun"].push(function(){var unityFileSystemInit=Module["unityFileSystemInit"]||function(){FS.mkdir("/idbfs");FS.mount(IDBFS,{},"/idbfs");Module.addRunDependency("JS_FileSystem_Mount");FS.syncfs(true,function(err){if(err)console.log("IndexedDB is not available. 
Data will not persist in cache and PlayerPrefs will not be saved.");Module.removeRunDependency("JS_FileSystem_Mount")})};unityFileSystemInit()})}var videoInputDevices=[];var videoInputDevicesEnumerated=false;var removeEnumerateMediaDevicesRunDependency;var enumerateWatchdog=null;function matchToOldDevice(newDevice){var oldDevices=Object.keys(videoInputDevices);for(var i=0;i{throw toThrow};var ENVIRONMENT_IS_WEB=typeof window=="object";var ENVIRONMENT_IS_WORKER=typeof importScripts=="function";var ENVIRONMENT_IS_NODE=typeof process=="object"&&typeof process.versions=="object"&&typeof process.versions.node=="string";var scriptDirectory="";function locateFile(path){if(Module["locateFile"]){return Module["locateFile"](path,scriptDirectory)}return scriptDirectory+path}var read_,readAsync,readBinary,setWindowTitle;function logExceptionOnExit(e){if(e instanceof ExitStatus)return;let toLog=e;err("exiting due to exception: "+toLog)}var fs;var nodePath;var requireNodeFS;if(ENVIRONMENT_IS_NODE){if(ENVIRONMENT_IS_WORKER){scriptDirectory=require("path").dirname(scriptDirectory)+"/"}else{scriptDirectory=__dirname+"/"}requireNodeFS=(()=>{if(!nodePath){fs=require("fs");nodePath=require("path")}});read_=function shell_read(filename,binary){requireNodeFS();filename=nodePath["normalize"](filename);return fs.readFileSync(filename,binary?undefined:"utf8")};readBinary=(filename=>{var ret=read_(filename,true);if(!ret.buffer){ret=new Uint8Array(ret)}return ret});readAsync=((filename,onload,onerror)=>{requireNodeFS();filename=nodePath["normalize"](filename);fs.readFile(filename,function(err,data){if(err)onerror(err);else onload(data.buffer)})});if(process["argv"].length>1){thisProgram=process["argv"][1].replace(/\\/g,"/")}arguments_=process["argv"].slice(2);process["on"]("uncaughtException",function(ex){if(!(ex instanceof ExitStatus)){throw ex}});process["on"]("unhandledRejection",function(reason){throw 
reason});quit_=((status,toThrow)=>{if(keepRuntimeAlive()){process["exitCode"]=status;throw toThrow}logExceptionOnExit(toThrow);process["exit"](status)});Module["inspect"]=function(){return"[Emscripten Module object]"}}else if(ENVIRONMENT_IS_WEB||ENVIRONMENT_IS_WORKER){if(ENVIRONMENT_IS_WORKER){scriptDirectory=self.location.href}else if(typeof document!="undefined"&&document.currentScript){scriptDirectory=document.currentScript.src}if(_scriptDir){scriptDirectory=_scriptDir}if(scriptDirectory.indexOf("blob:")!==0){scriptDirectory=scriptDirectory.substr(0,scriptDirectory.replace(/[?#].*/,"").lastIndexOf("/")+1)}else{scriptDirectory=""}{read_=(url=>{var xhr=new XMLHttpRequest;xhr.open("GET",url,false);xhr.send(null);return xhr.responseText});if(ENVIRONMENT_IS_WORKER){readBinary=(url=>{var xhr=new XMLHttpRequest;xhr.open("GET",url,false);xhr.responseType="arraybuffer";xhr.send(null);return new Uint8Array(xhr.response)})}readAsync=((url,onload,onerror)=>{var xhr=new XMLHttpRequest;xhr.open("GET",url,true);xhr.responseType="arraybuffer";xhr.onload=(()=>{if(xhr.status==200||xhr.status==0&&xhr.response){onload(xhr.response);return}onerror()});xhr.onerror=onerror;xhr.send(null)})}setWindowTitle=(title=>document.title=title)}else{}var out=Module["print"]||console.log.bind(console);var err=Module["printErr"]||console.warn.bind(console);Object.assign(Module,moduleOverrides);moduleOverrides=null;if(Module["arguments"])arguments_=Module["arguments"];if(Module["thisProgram"])thisProgram=Module["thisProgram"];if(Module["quit"])quit_=Module["quit"];var POINTER_SIZE=4;function warnOnce(text){if(!warnOnce.shown)warnOnce.shown={};if(!warnOnce.shown[text]){warnOnce.shown[text]=1;err(text)}}function convertJsFunctionToWasm(func,sig){if(typeof WebAssembly.Function=="function"){var typeNames={"i":"i32","j":"i64","f":"f32","d":"f64"};var type={parameters:[],results:sig[0]=="v"?[]:[typeNames[sig[0]]]};for(var i=1;i{tempRet0=value};var getTempRet0=()=>tempRet0;var 
wasmBinary;if(Module["wasmBinary"])wasmBinary=Module["wasmBinary"];var noExitRuntime=Module["noExitRuntime"]||true;if(typeof WebAssembly!="object"){abort("no native wasm support detected")}var wasmMemory;var ABORT=false;var EXITSTATUS;function assert(condition,text){if(!condition){abort(text)}}function getCFunc(ident){var func=Module["_"+ident];return func}function ccall(ident,returnType,argTypes,args,opts){var toC={"string":function(str){var ret=0;if(str!==null&&str!==undefined&&str!==0){var len=(str.length<<2)+1;ret=stackAlloc(len);stringToUTF8(str,ret,len)}return ret},"array":function(arr){var ret=stackAlloc(arr.length);writeArrayToMemory(arr,ret);return ret}};function convertReturnValue(ret){if(returnType==="string")return UTF8ToString(ret);if(returnType==="boolean")return Boolean(ret);return ret}var func=getCFunc(ident);var cArgs=[];var stack=0;if(args){for(var i=0;i=endIdx))++endPtr;if(endPtr-idx>16&&heapOrArray.buffer&&UTF8Decoder){return UTF8Decoder.decode(heapOrArray.subarray(idx,endPtr))}else{var str="";while(idx>10,56320|ch&1023)}}}return str}function UTF8ToString(ptr,maxBytesToRead){return ptr?UTF8ArrayToString(HEAPU8,ptr,maxBytesToRead):""}function stringToUTF8Array(str,heap,outIdx,maxBytesToWrite){if(!(maxBytesToWrite>0))return 0;var startIdx=outIdx;var endIdx=outIdx+maxBytesToWrite-1;for(var i=0;i=55296&&u<=57343){var u1=str.charCodeAt(++i);u=65536+((u&1023)<<10)|u1&1023}if(u<=127){if(outIdx>=endIdx)break;heap[outIdx++]=u}else if(u<=2047){if(outIdx+1>=endIdx)break;heap[outIdx++]=192|u>>6;heap[outIdx++]=128|u&63}else if(u<=65535){if(outIdx+2>=endIdx)break;heap[outIdx++]=224|u>>12;heap[outIdx++]=128|u>>6&63;heap[outIdx++]=128|u&63}else{if(outIdx+3>=endIdx)break;heap[outIdx++]=240|u>>18;heap[outIdx++]=128|u>>12&63;heap[outIdx++]=128|u>>6&63;heap[outIdx++]=128|u&63}}heap[outIdx]=0;return outIdx-startIdx}function stringToUTF8(str,outPtr,maxBytesToWrite){return stringToUTF8Array(str,HEAPU8,outPtr,maxBytesToWrite)}function lengthBytesUTF8(str){var 
len=0;for(var i=0;i=55296&&u<=57343)u=65536+((u&1023)<<10)|str.charCodeAt(++i)&1023;if(u<=127)++len;else if(u<=2047)len+=2;else if(u<=65535)len+=3;else len+=4}return len}var UTF16Decoder=typeof TextDecoder!="undefined"?new TextDecoder("utf-16le"):undefined;function allocateUTF8(str){var size=lengthBytesUTF8(str)+1;var ret=_malloc(size);if(ret)stringToUTF8Array(str,HEAP8,ret,size);return ret}function allocateUTF8OnStack(str){var size=lengthBytesUTF8(str)+1;var ret=stackAlloc(size);stringToUTF8Array(str,HEAP8,ret,size);return ret}function writeArrayToMemory(array,buffer){HEAP8.set(array,buffer)}function writeAsciiToMemory(str,buffer,dontAddNull){for(var i=0;i>0]=str.charCodeAt(i)}if(!dontAddNull)HEAP8[buffer>>0]=0}var buffer,HEAP8,HEAPU8,HEAP16,HEAPU16,HEAP32,HEAPU32,HEAPF32,HEAPF64;function updateGlobalBufferAndViews(buf){buffer=buf;Module["HEAP8"]=HEAP8=new Int8Array(buf);Module["HEAP16"]=HEAP16=new Int16Array(buf);Module["HEAP32"]=HEAP32=new Int32Array(buf);Module["HEAPU8"]=HEAPU8=new Uint8Array(buf);Module["HEAPU16"]=HEAPU16=new Uint16Array(buf);Module["HEAPU32"]=HEAPU32=new Uint32Array(buf);Module["HEAPF32"]=HEAPF32=new Float32Array(buf);Module["HEAPF64"]=HEAPF64=new Float64Array(buf)}var INITIAL_MEMORY=Module["INITIAL_MEMORY"]||33554432;var wasmTable;var __ATPRERUN__=[];var __ATINIT__=[];var __ATMAIN__=[];var __ATEXIT__=[];var __ATPOSTRUN__=[];var runtimeInitialized=false;function keepRuntimeAlive(){return noExitRuntime}function preRun(){if(Module["preRun"]){if(typeof Module["preRun"]=="function")Module["preRun"]=[Module["preRun"]];while(Module["preRun"].length){addOnPreRun(Module["preRun"].shift())}}callRuntimeCallbacks(__ATPRERUN__)}function initRuntime(){runtimeInitialized=true;if(!Module["noFSInit"]&&!FS.init.initialized)FS.init();FS.ignorePermissions=false;TTY.init();SOCKFS.root=FS.mount(SOCKFS,{},null);PIPEFS.root=FS.mount(PIPEFS,{},null);callRuntimeCallbacks(__ATINIT__)}function preMain(){callRuntimeCallbacks(__ATMAIN__)}function 
postRun(){if(Module["postRun"]){if(typeof Module["postRun"]=="function")Module["postRun"]=[Module["postRun"]];while(Module["postRun"].length){addOnPostRun(Module["postRun"].shift())}}callRuntimeCallbacks(__ATPOSTRUN__)}function addOnPreRun(cb){__ATPRERUN__.unshift(cb)}function addOnInit(cb){__ATINIT__.unshift(cb)}function addOnPostRun(cb){__ATPOSTRUN__.unshift(cb)}var runDependencies=0;var runDependencyWatcher=null;var dependenciesFulfilled=null;function getUniqueRunDependency(id){return id}function addRunDependency(id){runDependencies++;if(Module["monitorRunDependencies"]){Module["monitorRunDependencies"](runDependencies)}}function removeRunDependency(id){runDependencies--;if(Module["monitorRunDependencies"]){Module["monitorRunDependencies"](runDependencies)}if(runDependencies==0){if(runDependencyWatcher!==null){clearInterval(runDependencyWatcher);runDependencyWatcher=null}if(dependenciesFulfilled){var callback=dependenciesFulfilled;dependenciesFulfilled=null;callback()}}}Module["preloadedImages"]={};Module["preloadedAudios"]={};function abort(what){{if(Module["onAbort"]){Module["onAbort"](what)}}what="Aborted("+what+")";err(what);ABORT=true;EXITSTATUS=1;what+=". 
Build with -s ASSERTIONS=1 for more info.";var e=new WebAssembly.RuntimeError(what);readyPromiseReject(e);throw e}var dataURIPrefix="data:application/octet-stream;base64,";function isDataURI(filename){return filename.startsWith(dataURIPrefix)}function isFileURI(filename){return filename.startsWith("file://")}var wasmBinaryFile;wasmBinaryFile="build.wasm";if(!isDataURI(wasmBinaryFile)){wasmBinaryFile=locateFile(wasmBinaryFile)}function getBinary(file){try{if(file==wasmBinaryFile&&wasmBinary){return new Uint8Array(wasmBinary)}if(readBinary){return readBinary(file)}else{throw"both async and sync fetching of the wasm failed"}}catch(err){abort(err)}}function getBinaryPromise(){if(!wasmBinary&&(ENVIRONMENT_IS_WEB||ENVIRONMENT_IS_WORKER)){if(typeof fetch=="function"&&!isFileURI(wasmBinaryFile)){return fetch(wasmBinaryFile,{credentials:"same-origin"}).then(function(response){if(!response["ok"]){throw"failed to load wasm binary file at '"+wasmBinaryFile+"'"}return response["arrayBuffer"]()}).catch(function(){return getBinary(wasmBinaryFile)})}else{if(readAsync){return new Promise(function(resolve,reject){readAsync(wasmBinaryFile,function(response){resolve(new Uint8Array(response))},reject)})}}}return Promise.resolve().then(function(){return getBinary(wasmBinaryFile)})}function createWasm(){var info={"env":asmLibraryArg,"wasi_snapshot_preview1":asmLibraryArg};function receiveInstance(instance,module){var exports=instance.exports;Module["asm"]=exports;wasmMemory=Module["asm"]["memory"];updateGlobalBufferAndViews(wasmMemory.buffer);wasmTable=Module["asm"]["__indirect_function_table"];addOnInit(Module["asm"]["__wasm_call_ctors"]);removeRunDependency("wasm-instantiate")}addRunDependency("wasm-instantiate");function receiveInstantiationResult(result){receiveInstance(result["instance"])}function instantiateArrayBuffer(receiver){return getBinaryPromise().then(function(binary){return WebAssembly.instantiate(binary,info)}).then(function(instance){return 
instance}).then(receiver,function(reason){err("failed to asynchronously prepare wasm: "+reason);abort(reason)})}function instantiateAsync(){if(!wasmBinary&&typeof WebAssembly.instantiateStreaming=="function"&&!isDataURI(wasmBinaryFile)&&!isFileURI(wasmBinaryFile)&&typeof fetch=="function"){return fetch(wasmBinaryFile,{credentials:"same-origin"}).then(function(response){var result=WebAssembly.instantiateStreaming(response,info);return result.then(receiveInstantiationResult,function(reason){err("wasm streaming compile failed: "+reason);err("falling back to ArrayBuffer instantiation");return instantiateArrayBuffer(receiveInstantiationResult)})})}else{return instantiateArrayBuffer(receiveInstantiationResult)}}if(Module["instantiateWasm"]){try{var exports=Module["instantiateWasm"](info,receiveInstance);return exports}catch(e){err("Module.instantiateWasm callback failed with error: "+e);return false}}instantiateAsync().catch(readyPromiseReject);return{}}var tempDouble;var tempI64;var ASM_CONSTS={4007452:function(){return Module.webglContextAttributes.premultipliedAlpha},4007513:function(){return Module.webglContextAttributes.preserveDrawingBuffer},4007577:function(){return Module.webglContextAttributes.powerPreference}};function callRuntimeCallbacks(callbacks){while(callbacks.length>0){var callback=callbacks.shift();if(typeof callback=="function"){callback(Module);continue}var func=callback.func;if(typeof func=="number"){if(callback.arg===undefined){(function(){dynCall_v.call(null,func)})()}else{(function(a1){dynCall_vi.apply(null,[func,a1])})(callback.arg)}}else{func(callback.arg===undefined?null:callback.arg)}}}function withStackSave(f){var stack=stackSave();var ret=f();stackRestore(stack);return ret}function demangle(func){return func}function demangleAll(text){var regex=/\b_Z[\w\d_]+/g;return text.replace(regex,function(x){var y=demangle(x);return x===y?x:y+" ["+x+"]"})}function dynCallLegacy(sig,ptr,args){var f=Module["dynCall_"+sig];return 
args&&args.length?f.apply(null,[ptr].concat(args)):f.call(null,ptr)}var wasmTableMirror=[];function getWasmTableEntry(funcPtr){var func=wasmTableMirror[funcPtr];if(!func){if(funcPtr>=wasmTableMirror.length)wasmTableMirror.length=funcPtr+1;wasmTableMirror[funcPtr]=func=wasmTable.get(funcPtr)}return func}function dynCall(sig,ptr,args){return dynCallLegacy(sig,ptr,args)}function handleException(e){if(e instanceof ExitStatus||e=="unwind"){return EXITSTATUS}quit_(1,e)}function jsStackTrace(){var error=new Error;if(!error.stack){try{throw new Error}catch(e){error=e}if(!error.stack){return"(no stack trace available)"}}return error.stack.toString()}function setWasmTableEntry(idx,func){wasmTable.set(idx,func);wasmTableMirror[idx]=func}function stackTrace(){var js=jsStackTrace();if(Module["extraStackTrace"])js+="\n"+Module["extraStackTrace"]();return demangleAll(js)}function _GetJSMemoryInfo(totalJSptr,usedJSptr){if(performance.memory){HEAPF64[totalJSptr>>3]=performance.memory.totalJSHeapSize;HEAPF64[usedJSptr>>3]=performance.memory.usedJSHeapSize}else{HEAPF64[totalJSptr>>3]=NaN;HEAPF64[usedJSptr>>3]=NaN}}var JS_Accelerometer=null;var JS_Accelerometer_callback=0;function _JS_Accelerometer_IsRunning(){return JS_Accelerometer&&JS_Accelerometer.activated||JS_Accelerometer_callback!=0}var JS_Accelerometer_multiplier=1;var JS_Accelerometer_lastValue={x:0,y:0,z:0};function JS_Accelerometer_eventHandler(){JS_Accelerometer_lastValue={x:JS_Accelerometer.x*JS_Accelerometer_multiplier,y:JS_Accelerometer.y*JS_Accelerometer_multiplier,z:JS_Accelerometer.z*JS_Accelerometer_multiplier};if(JS_Accelerometer_callback!=0)dynCall_vfff(JS_Accelerometer_callback,JS_Accelerometer_lastValue.x,JS_Accelerometer_lastValue.y,JS_Accelerometer_lastValue.z)}var JS_Accelerometer_frequencyRequest=0;var JS_Accelerometer_frequency=0;var JS_LinearAccelerationSensor_callback=0;var JS_GravitySensor_callback=0;var JS_Gyroscope_callback=0;function JS_ComputeGravity(accelerometerValue,linearAccelerationValue){var 
difference={x:accelerometerValue.x-linearAccelerationValue.x,y:accelerometerValue.y-linearAccelerationValue.y,z:accelerometerValue.z-linearAccelerationValue.z};var differenceMagnitudeSq=difference.x*difference.x+difference.y*difference.y+difference.z*difference.z;var sum={x:accelerometerValue.x+linearAccelerationValue.x,y:accelerometerValue.y+linearAccelerationValue.y,z:accelerometerValue.z+linearAccelerationValue.z};var sumMagnitudeSq=sum.x*sum.x+sum.y*sum.y+sum.z*sum.z;return differenceMagnitudeSq<=sumMagnitudeSq?difference:sum}function JS_DeviceMotion_eventHandler(event){var accelerometerValue={x:event.accelerationIncludingGravity.x*JS_Accelerometer_multiplier,y:event.accelerationIncludingGravity.y*JS_Accelerometer_multiplier,z:event.accelerationIncludingGravity.z*JS_Accelerometer_multiplier};if(JS_Accelerometer_callback!=0)dynCall_vfff(JS_Accelerometer_callback,accelerometerValue.x,accelerometerValue.y,accelerometerValue.z);var linearAccelerationValue={x:event.acceleration.x*JS_Accelerometer_multiplier,y:event.acceleration.y*JS_Accelerometer_multiplier,z:event.acceleration.z*JS_Accelerometer_multiplier};if(JS_LinearAccelerationSensor_callback!=0)dynCall_vfff(JS_LinearAccelerationSensor_callback,linearAccelerationValue.x,linearAccelerationValue.y,linearAccelerationValue.z);if(JS_GravitySensor_callback!=0){var gravityValue=JS_ComputeGravity(accelerometerValue,linearAccelerationValue);dynCall_vfff(JS_GravitySensor_callback,gravityValue.x,gravityValue.y,gravityValue.z)}if(JS_Gyroscope_callback!=0){var degToRad=Math.PI/180;dynCall_vfff(JS_Gyroscope_callback,event.rotationRate.alpha*degToRad,event.rotationRate.beta*degToRad,event.rotationRate.gamma*degToRad)}}var JS_DeviceSensorPermissions=0;function JS_RequestDeviceSensorPermissions(permissions){if(permissions&1){if(typeof 
DeviceOrientationEvent.requestPermission==="function"){DeviceOrientationEvent.requestPermission().then(function(permissionState){if(permissionState==="granted"){JS_DeviceSensorPermissions&=~1}else{warnOnce("DeviceOrientationEvent permission not granted")}}).catch(function(err){warnOnce(err);JS_DeviceSensorPermissions|=1})}}if(permissions&2){if(typeof DeviceMotionEvent.requestPermission==="function"){DeviceMotionEvent.requestPermission().then(function(permissionState){if(permissionState==="granted"){JS_DeviceSensorPermissions&=~2}else{warnOnce("DeviceMotionEvent permission not granted")}}).catch(function(err){warnOnce(err);JS_DeviceSensorPermissions|=2})}}}function JS_DeviceMotion_add(){if(JS_Accelerometer_callback==0&&JS_LinearAccelerationSensor_callback==0&&JS_GravitySensor_callback==0&&JS_Gyroscope_callback==0){JS_RequestDeviceSensorPermissions(2);window.addEventListener("devicemotion",JS_DeviceMotion_eventHandler)}}function JS_DefineAccelerometerMultiplier(){var g=9.80665;JS_Accelerometer_multiplier=/(iPhone|iPad|Macintosh)/i.test(navigator.userAgent)?1/g:-1/g}function _JS_Accelerometer_Start(callback,frequency){JS_DefineAccelerometerMultiplier();if(typeof Accelerometer==="undefined"){JS_DeviceMotion_add();if(callback!=0)JS_Accelerometer_callback=callback;return}if(callback!=0)JS_Accelerometer_callback=callback;function InitializeAccelerometer(frequency){JS_Accelerometer=new Accelerometer({frequency:frequency,referenceFrame:"device"});JS_Accelerometer.addEventListener("reading",JS_Accelerometer_eventHandler);JS_Accelerometer.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_Accelerometer.start();JS_Accelerometer_frequency=frequency}if(JS_Accelerometer){if(JS_Accelerometer_frequency!=frequency){JS_Accelerometer.stop();JS_Accelerometer.removeEventListener("reading",JS_Accelerometer_eventHandler);InitializeAccelerometer(frequency)}}else 
if(JS_Accelerometer_frequencyRequest!=0){JS_Accelerometer_frequencyRequest=frequency}else{JS_Accelerometer_frequencyRequest=frequency;navigator.permissions.query({name:"accelerometer"}).then(function(result){if(result.state==="granted"){InitializeAccelerometer(JS_Accelerometer_frequencyRequest)}else{warnOnce("No permission to use Accelerometer.")}JS_Accelerometer_frequencyRequest=0})}}function JS_DeviceMotion_remove(){if(JS_Accelerometer_callback==0&&JS_LinearAccelerationSensor_callback==0&&JS_GravitySensor_callback==0&&JS_Gyroscope_callback==0){window.removeEventListener("devicemotion",JS_DeviceOrientation_eventHandler)}}function _JS_Accelerometer_Stop(){if(JS_Accelerometer){if(typeof GravitySensor!=="undefined"||JS_GravitySensor_callback==0){JS_Accelerometer.stop();JS_Accelerometer.removeEventListener("reading",JS_Accelerometer_eventHandler);JS_Accelerometer=null}JS_Accelerometer_callback=0;JS_Accelerometer_frequency=0}else if(JS_Accelerometer_callback!=0){JS_Accelerometer_callback=0;JS_DeviceMotion_remove()}}function _JS_Cursor_SetImage(ptr,length){var binary="";for(var i=0;i>2]=viewportX-(rect?rect.left:0);HEAPU32[targetY>>2]=viewportY-(rect?rect.top:0)}function stringToNewUTF8(jsString){var length=lengthBytesUTF8(jsString)+1;var cString=_malloc(length);stringToUTF8(jsString,cString,length);return cString}function _JS_DOM_UnityCanvasSelector(){var canvasSelector=jsCanvasSelector();if(_JS_DOM_UnityCanvasSelector.selector!=canvasSelector){_free(_JS_DOM_UnityCanvasSelector.ptr);_JS_DOM_UnityCanvasSelector.ptr=stringToNewUTF8(canvasSelector);_JS_DOM_UnityCanvasSelector.selector=canvasSelector}return _JS_DOM_UnityCanvasSelector.ptr}function _JS_Eval_OpenURL(ptr){var str=UTF8ToString(ptr);window.open(str,"_blank","")}var fs={numPendingSync:0,syncInternal:1e3,syncInProgress:false,sync:function(onlyPendingSync){if(onlyPendingSync){if(fs.numPendingSync==0)return}else 
if(fs.syncInProgress){fs.numPendingSync++;return}fs.syncInProgress=true;FS.syncfs(false,function(err){fs.syncInProgress=false});fs.numPendingSync=0}};function _JS_FileSystem_Initialize(){Module.setInterval(function(){fs.sync(true)},fs.syncInternal)}function _JS_FileSystem_Sync(){fs.sync(false)}var JS_GravitySensor=null;function _JS_GravitySensor_IsRunning(){return typeof GravitySensor!=="undefined"?JS_GravitySensor&&JS_GravitySensor.activated:JS_GravitySensor_callback!=0}function JS_GravitySensor_eventHandler(){if(JS_GravitySensor_callback!=0)dynCall_vfff(JS_GravitySensor_callback,JS_GravitySensor.x*JS_Accelerometer_multiplier,JS_GravitySensor.y*JS_Accelerometer_multiplier,JS_GravitySensor.z*JS_Accelerometer_multiplier)}var JS_GravitySensor_frequencyRequest=0;var JS_LinearAccelerationSensor=null;function JS_LinearAccelerationSensor_eventHandler(){var linearAccelerationValue={x:JS_LinearAccelerationSensor.x*JS_Accelerometer_multiplier,y:JS_LinearAccelerationSensor.y*JS_Accelerometer_multiplier,z:JS_LinearAccelerationSensor.z*JS_Accelerometer_multiplier};if(JS_LinearAccelerationSensor_callback!=0)dynCall_vfff(JS_LinearAccelerationSensor_callback,linearAccelerationValue.x,linearAccelerationValue.y,linearAccelerationValue.z);if(JS_GravitySensor_callback!=0&&typeof GravitySensor==="undefined"){var gravityValue=JS_ComputeGravity(JS_Accelerometer_lastValue,linearAccelerationValue);dynCall_vfff(JS_GravitySensor_callback,gravityValue.x,gravityValue.y,gravityValue.z)}}var JS_LinearAccelerationSensor_frequencyRequest=0;var JS_LinearAccelerationSensor_frequency=0;function _JS_LinearAccelerationSensor_Start(callback,frequency){JS_DefineAccelerometerMultiplier();if(typeof LinearAccelerationSensor==="undefined"){JS_DeviceMotion_add();if(callback!=0)JS_LinearAccelerationSensor_callback=callback;return}if(callback!=0)JS_LinearAccelerationSensor_callback=callback;function InitializeLinearAccelerationSensor(frequency){JS_LinearAccelerationSensor=new 
LinearAccelerationSensor({frequency:frequency,referenceFrame:"device"});JS_LinearAccelerationSensor.addEventListener("reading",JS_LinearAccelerationSensor_eventHandler);JS_LinearAccelerationSensor.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_LinearAccelerationSensor.start();JS_LinearAccelerationSensor_frequency=frequency}if(JS_LinearAccelerationSensor){if(JS_LinearAccelerationSensor_frequency!=frequency){JS_LinearAccelerationSensor.stop();JS_LinearAccelerationSensor.removeEventListener("reading",JS_LinearAccelerationSensor_eventHandler);InitializeLinearAccelerationSensor(frequency)}}else if(JS_LinearAccelerationSensor_frequencyRequest!=0){JS_LinearAccelerationSensor_frequencyRequest=frequency}else{JS_LinearAccelerationSensor_frequencyRequest=frequency;navigator.permissions.query({name:"accelerometer"}).then(function(result){if(result.state==="granted"){InitializeLinearAccelerationSensor(JS_LinearAccelerationSensor_frequencyRequest)}else{warnOnce("No permission to use LinearAccelerationSensor.")}JS_LinearAccelerationSensor_frequencyRequest=0})}}function _JS_GravitySensor_Start(callback,frequency){if(typeof GravitySensor==="undefined"){_JS_Accelerometer_Start(0,Math.max(frequency,JS_Accelerometer_frequency));_JS_LinearAccelerationSensor_Start(0,Math.max(frequency,JS_LinearAccelerationSensor_frequency));JS_GravitySensor_callback=callback;return}JS_DefineAccelerometerMultiplier();JS_GravitySensor_callback=callback;function InitializeGravitySensor(frequency){JS_GravitySensor=new GravitySensor({frequency:frequency,referenceFrame:"device"});JS_GravitySensor.addEventListener("reading",JS_GravitySensor_eventHandler);JS_GravitySensor.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_GravitySensor.start()}if(JS_GravitySensor){JS_GravitySensor.stop();JS_GravitySensor.removeEventListener("reading",JS_GravitySensor_eventHandler);InitializeGravitySensor(frequency)}else 
if(JS_GravitySensor_frequencyRequest!=0){JS_GravitySensor_frequencyRequest=frequency}else{JS_GravitySensor_frequencyRequest=frequency;navigator.permissions.query({name:"accelerometer"}).then(function(result){if(result.state==="granted"){InitializeGravitySensor(JS_GravitySensor_frequencyRequest)}else{warnOnce("No permission to use GravitySensor.")}JS_GravitySensor_frequencyRequest=0})}}function _JS_LinearAccelerationSensor_Stop(){if(JS_LinearAccelerationSensor){if(typeof GravitySensor!=="undefined"||JS_GravitySensor_callback==0){JS_LinearAccelerationSensor.stop();JS_LinearAccelerationSensor.removeEventListener("reading",JS_LinearAccelerationSensor_eventHandler);JS_LinearAccelerationSensor=null}JS_LinearAccelerationSensor_callback=0;JS_LinearAccelerationSensor_frequency=0}else if(JS_LinearAccelerationSensor_callback!=0){JS_LinearAccelerationSensor_callback=0;JS_DeviceMotion_remove()}}function _JS_GravitySensor_Stop(){JS_GravitySensor_callback=0;if(typeof GravitySensor==="undefined"){if(JS_Accelerometer_callback==0)_JS_Accelerometer_Stop();if(JS_LinearAccelerationSensor_callback==0)_JS_LinearAccelerationSensor_Stop();return}if(JS_GravitySensor){JS_GravitySensor.stop();JS_GravitySensor.removeEventListener("reading",JS_GravitySensor_eventHandler);JS_GravitySensor=null}}function _JS_GuardAgainstJsExceptions(cb){try{(function(){dynCall_v.call(null,cb)})()}catch(e){console.warn(e)}}var JS_Gyroscope=null;function _JS_Gyroscope_IsRunning(){return JS_Gyroscope&&JS_Gyroscope.activated||JS_Gyroscope_callback!=0}function JS_Gyroscope_eventHandler(){if(JS_Gyroscope_callback!=0)dynCall_vfff(JS_Gyroscope_callback,JS_Gyroscope.x,JS_Gyroscope.y,JS_Gyroscope.z)}var JS_Gyroscope_frequencyRequest=0;function _JS_Gyroscope_Start(callback,frequency){if(typeof Gyroscope==="undefined"){JS_DeviceMotion_add();JS_Gyroscope_callback=callback;return}JS_Gyroscope_callback=callback;function InitializeGyroscope(frequency){JS_Gyroscope=new 
Gyroscope({frequency:frequency,referenceFrame:"device"});JS_Gyroscope.addEventListener("reading",JS_Gyroscope_eventHandler);JS_Gyroscope.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_Gyroscope.start()}if(JS_Gyroscope){JS_Gyroscope.stop();JS_Gyroscope.removeEventListener("reading",JS_Gyroscope_eventHandler);InitializeGyroscope(frequency)}else if(JS_Gyroscope_frequencyRequest!=0){JS_Gyroscope_frequencyRequest=frequency}else{JS_Gyroscope_frequencyRequest=frequency;navigator.permissions.query({name:"gyroscope"}).then(function(result){if(result.state==="granted"){InitializeGyroscope(JS_Gyroscope_frequencyRequest)}else{warnOnce("No permission to use Gyroscope.")}JS_Gyroscope_frequencyRequest=0})}}function _JS_Gyroscope_Stop(){if(JS_Gyroscope){JS_Gyroscope.stop();JS_Gyroscope.removeEventListener("reading",JS_Gyroscope_eventHandler);JS_Gyroscope=null;JS_Gyroscope_callback=0}else if(JS_Gyroscope_callback!=0){JS_Gyroscope_callback=0;JS_DeviceMotion_remove()}}function _JS_LinearAccelerationSensor_IsRunning(){return JS_LinearAccelerationSensor&&JS_LinearAccelerationSensor.activated||JS_LinearAccelerationSensor_callback!=0}function _JS_Log_Dump(ptr,type){var str=UTF8ToString(ptr);if(typeof dump=="function")dump(str);switch(type){case 0:case 1:case 4:console.error(str);return;case 2:console.warn(str);return;case 3:case 5:console.log(str);return;default:console.error("Unknown console message type!");console.error(str)}}function _JS_Log_StackTrace(buffer,bufferSize){var trace=stackTrace();if(buffer)stringToUTF8(trace,buffer,bufferSize);return lengthBytesUTF8(trace)}var mobile_input_hide_delay=null;var mobile_input_text=null;var mobile_input=null;var mobile_input_ignore_blur_event=false;function _JS_MobileKeybard_GetIgnoreBlurEvent(){return mobile_input_ignore_blur_event}function _JS_MobileKeyboard_GetKeyboardStatus(){var kKeyboardStatusVisible=0;var kKeyboardStatusDone=1;if(!mobile_input)return kKeyboardStatusDone;return kKeyboardStatusVisible}function 
_JS_MobileKeyboard_GetText(buffer,bufferSize){var text=mobile_input&&mobile_input.input?mobile_input.input.value:mobile_input_text?mobile_input_text:"";if(buffer)stringToUTF8(text,buffer,bufferSize);return lengthBytesUTF8(text)}function _JS_MobileKeyboard_GetTextSelection(outStart,outLength){if(!mobile_input){HEAP32[outStart>>2]=0;HEAP32[outLength>>2]=0;return}HEAP32[outStart>>2]=mobile_input.input.selectionStart;HEAP32[outLength>>2]=mobile_input.input.selectionEnd-mobile_input.input.selectionStart}function _JS_MobileKeyboard_Hide(delay){if(mobile_input_hide_delay)return;mobile_input_ignore_blur_event=true;function hideMobileKeyboard(){if(mobile_input&&mobile_input.input){mobile_input_text=mobile_input.input.value;mobile_input.input=null;if(mobile_input.parentNode&&mobile_input.parentNode){mobile_input.parentNode.removeChild(mobile_input)}}mobile_input=null;mobile_input_hide_delay=null;setTimeout(function(){mobile_input_ignore_blur_event=false},100)}if(delay){var hideDelay=200;mobile_input_hide_delay=setTimeout(hideMobileKeyboard,hideDelay)}else{hideMobileKeyboard()}}function _JS_MobileKeyboard_SetCharacterLimit(limit){if(!mobile_input)return;mobile_input.input.maxLength=limit}function _JS_MobileKeyboard_SetText(text){if(!mobile_input)return;text=UTF8ToString(text);mobile_input.input.value=text}function _JS_MobileKeyboard_SetTextSelection(start,length){if(!mobile_input)return;mobile_input.input.setSelectionRange(start,start+length)}function _JS_MobileKeyboard_Show(text,keyboardType,autocorrection,multiline,secure,alert,placeholder,characterLimit){if(mobile_input_hide_delay){clearTimeout(mobile_input_hide_delay);mobile_input_hide_delay=null}text=UTF8ToString(text);mobile_input_text=text;placeholder=UTF8ToString(placeholder);var container=document.body;var hasExistingMobileInput=!!mobile_input;var input_type;var KEYBOARD_TYPE_NUMBERS_AND_PUNCTUATION=2;var KEYBOARD_TYPE_URL=3;var KEYBOARD_TYPE_NUMBER_PAD=4;var KEYBOARD_TYPE_PHONE_PAD=5;var 
KEYBOARD_TYPE_EMAIL_ADDRESS=7;if(!secure){switch(keyboardType){case KEYBOARD_TYPE_EMAIL_ADDRESS:input_type="email";break;case KEYBOARD_TYPE_URL:input_type="url";break;case KEYBOARD_TYPE_NUMBERS_AND_PUNCTUATION:case KEYBOARD_TYPE_NUMBER_PAD:case KEYBOARD_TYPE_PHONE_PAD:input_type="number";break;default:input_type="text";break}}else{input_type="password"}if(hasExistingMobileInput){if(mobile_input.multiline!=multiline){_JS_MobileKeyboard_Hide(false);return}}var inputContainer=mobile_input||document.createElement("div");if(!hasExistingMobileInput){inputContainer.style="width:100%; position:fixed; bottom:0px; margin:0px; padding:0px; left:0px; border: 1px solid #000; border-radius: 5px; background-color:#fff; font-size:14pt;";container.appendChild(inputContainer);mobile_input=inputContainer}var input=hasExistingMobileInput?mobile_input.input:document.createElement(multiline?"textarea":"input");mobile_input.multiline=multiline;mobile_input.secure=secure;mobile_input.keyboardType=keyboardType;mobile_input.inputType=input_type;input.type=input_type;input.style="width:calc(100% - 85px); "+(multiline?"height:100px;":"")+"vertical-align:top; border-radius: 5px; outline:none; cursor:default; resize:none; border:0px; padding:10px 0px 10px 10px;";input.spellcheck=autocorrection?true:false;input.maxLength=characterLimit>0?characterLimit:524288;input.value=text;input.placeholder=placeholder;if(!hasExistingMobileInput){inputContainer.appendChild(input);inputContainer.input=input}if(!hasExistingMobileInput){var okButton=document.createElement("button");okButton.innerText="OK";okButton.style="border:0; position:absolute; left:calc(100% - 75px); top:0px; width:75px; height:100%; margin:0; padding:0; border-radius: 5px; 
background-color:#fff";okButton.addEventListener("touchend",function(){_JS_MobileKeyboard_Hide(true)});inputContainer.appendChild(okButton);inputContainer.okButton=okButton;input.addEventListener("keyup",function(e){if(input.parentNode.multiline)return;if(e.code=="Enter"||e.which==13||e.keyCode==13){_JS_MobileKeyboard_Hide(true)}});input.addEventListener("blur",function(e){_JS_MobileKeyboard_Hide(true);e.stopPropagation();e.preventDefault()});input.select();input.focus()}else{input.select()}}var JS_OrientationSensor=null;var JS_OrientationSensor_callback=0;function _JS_OrientationSensor_IsRunning(){return JS_OrientationSensor&&JS_OrientationSensor.activated||JS_OrientationSensor_callback!=0}function JS_OrientationSensor_eventHandler(){if(JS_OrientationSensor_callback!=0)dynCall_vffff(JS_OrientationSensor_callback,JS_OrientationSensor.quaternion[0],JS_OrientationSensor.quaternion[1],JS_OrientationSensor.quaternion[2],JS_OrientationSensor.quaternion[3])}var JS_OrientationSensor_frequencyRequest=0;function JS_DeviceOrientation_eventHandler(event){if(JS_OrientationSensor_callback){var degToRad=Math.PI/180;var x=event.beta*degToRad;var y=event.gamma*degToRad;var z=event.alpha*degToRad;var cx=Math.cos(x/2);var sx=Math.sin(x/2);var cy=Math.cos(y/2);var sy=Math.sin(y/2);var cz=Math.cos(z/2);var sz=Math.sin(z/2);var qx=sx*cy*cz-cx*sy*sz;var qy=cx*sy*cz+sx*cy*sz;var qz=cx*cy*sz+sx*sy*cz;var qw=cx*cy*cz-sx*sy*sz;dynCall_vffff(JS_OrientationSensor_callback,qx,qy,qz,qw)}}function _JS_OrientationSensor_Start(callback,frequency){if(typeof RelativeOrientationSensor==="undefined"){if(JS_OrientationSensor_callback==0){JS_OrientationSensor_callback=callback;JS_RequestDeviceSensorPermissions(1);window.addEventListener("deviceorientation",JS_DeviceOrientation_eventHandler)}return}JS_OrientationSensor_callback=callback;function InitializeOrientationSensor(frequency){JS_OrientationSensor=new 
RelativeOrientationSensor({frequency:frequency,referenceFrame:"device"});JS_OrientationSensor.addEventListener("reading",JS_OrientationSensor_eventHandler);JS_OrientationSensor.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_OrientationSensor.start()}if(JS_OrientationSensor){JS_OrientationSensor.stop();JS_OrientationSensor.removeEventListener("reading",JS_OrientationSensor_eventHandler);InitializeOrientationSensor(frequency)}else if(JS_OrientationSensor_frequencyRequest!=0){JS_OrientationSensor_frequencyRequest=frequency}else{JS_OrientationSensor_frequencyRequest=frequency;Promise.all([navigator.permissions.query({name:"accelerometer"}),navigator.permissions.query({name:"gyroscope"})]).then(function(results){if(results.every(function(result){return result.state==="granted"})){InitializeOrientationSensor(JS_OrientationSensor_frequencyRequest)}else{warnOnce("No permissions to use RelativeOrientationSensor.")}JS_OrientationSensor_frequencyRequest=0})}}function _JS_OrientationSensor_Stop(){if(JS_OrientationSensor){JS_OrientationSensor.stop();JS_OrientationSensor.removeEventListener("reading",JS_OrientationSensor_eventHandler);JS_OrientationSensor=null}else if(JS_OrientationSensor_callback!=0){window.removeEventListener("deviceorientation",JS_DeviceOrientation_eventHandler)}JS_OrientationSensor_callback=0}function _JS_RequestDeviceSensorPermissionsOnTouch(){if(JS_DeviceSensorPermissions==0)return;JS_RequestDeviceSensorPermissions(JS_DeviceSensorPermissions)}function _JS_RunQuitCallbacks(){Module.QuitCleanup()}var JS_ScreenOrientation_callback=0;function JS_ScreenOrientation_eventHandler(){if(JS_ScreenOrientation_callback)dynCall_viii(JS_ScreenOrientation_callback,window.innerWidth,window.innerHeight,screen.orientation?screen.orientation.angle:window.orientation)}function 
_JS_ScreenOrientation_DeInit(){JS_ScreenOrientation_callback=0;window.removeEventListener("resize",JS_ScreenOrientation_eventHandler);if(screen.orientation){screen.orientation.removeEventListener("change",JS_ScreenOrientation_eventHandler)}}function _JS_ScreenOrientation_Init(callback){if(!JS_ScreenOrientation_callback){if(screen.orientation){screen.orientation.addEventListener("change",JS_ScreenOrientation_eventHandler)}window.addEventListener("resize",JS_ScreenOrientation_eventHandler);JS_ScreenOrientation_callback=callback;setTimeout(JS_ScreenOrientation_eventHandler,0)}}var JS_ScreenOrientation_requestedLockType=-1;var JS_ScreenOrientation_appliedLockType=-1;var JS_ScreenOrientation_timeoutID=-1;function _JS_ScreenOrientation_Lock(orientationLockType){if(!screen.orientation){return}function applyLock(){JS_ScreenOrientation_appliedLockType=JS_ScreenOrientation_requestedLockType;var screenOrientations=["any",0,"landscape","portrait","portrait-primary","portrait-secondary","landscape-primary","landscape-secondary"];var type=screenOrientations[JS_ScreenOrientation_appliedLockType];screen.orientation.lock(type).then(function(){if(JS_ScreenOrientation_requestedLockType!=JS_ScreenOrientation_appliedLockType){JS_ScreenOrientation_timeoutID=setTimeout(applyLock,0)}else{JS_ScreenOrientation_timeoutID=-1}}).catch(function(err){warnOnce(err);JS_ScreenOrientation_timeoutID=-1})}JS_ScreenOrientation_requestedLockType=orientationLockType;if(JS_ScreenOrientation_timeoutID==-1&&orientationLockType!=JS_ScreenOrientation_appliedLockType){JS_ScreenOrientation_timeoutID=setTimeout(applyLock,0)}}var WEBAudio={audioInstanceIdCounter:0,audioInstances:{},audioContext:null,audioWebEnabled:0,audioCache:[],pendingAudioSources:{}};function jsAudioMixinSetPitch(source){source.estimatePlaybackPosition=function(){var 
t=(WEBAudio.audioContext.currentTime-source.playbackStartTime)*source.playbackRate.value;if(source.loop&&t>=source.loopStart){t=(t-source.loopStart)%(source.loopEnd-source.loopStart)+source.loopStart}return t};source.setPitch=function(newPitch){var curPosition=source.estimatePlaybackPosition();if(curPosition>=0){source.playbackStartTime=WEBAudio.audioContext.currentTime-curPosition/newPitch}if(source.playbackRate.value!==newPitch)source.playbackRate.value=newPitch}}function jsAudioCreateUncompressedSoundClip(buffer,error){var soundClip={buffer:buffer,error:error};soundClip.release=function(){};soundClip.getLength=function(){if(!this.buffer){console.log("Trying to get length of sound which is not loaded.");return 0}var sampleRateRatio=44100/this.buffer.sampleRate;return this.buffer.length*sampleRateRatio};soundClip.getData=function(ptr,length){if(!this.buffer){console.log("Trying to get data of sound which is not loaded.");return 0}var startOutputBuffer=ptr>>2;var output=HEAPF32.subarray(startOutputBuffer,startOutputBuffer+(length>>2));var numMaxSamples=Math.floor((length>>2)/this.buffer.numberOfChannels);var numReadSamples=Math.min(this.buffer.length,numMaxSamples);for(var i=0;istartDelayThresholdMS){source.playTimeout=setTimeout(function(){source.playTimeout=null;source._startPlayback(offset)},startDelayMS)}else{source._startPlayback(offset)}};source.stop=function(stopTime){if(typeof stopTime==="undefined"){stopTime=WEBAudio.audioContext.currentTime}var stopDelayThresholdMS=4;var stopDelayMS=(stopTime-WEBAudio.audioContext.currentTime)*1e3;if(stopDelayMS>stopDelayThresholdMS){setTimeout(function(){source._pauseMediaElement();source.isStopped=true},stopDelayMS)}else{source._pauseMediaElement();source.isStopped=true}};jsAudioMixinSetPitch(source);return source};return soundClip}function _JS_Sound_Load(ptr,length,decompress,fmodSoundType){if(WEBAudio.audioWebEnabled==0)return 0;var audioData=HEAPU8.buffer.slice(ptr,ptr+length);if(length<131072)decompress=1;var 
sound;if(decompress){sound=jsAudioCreateUncompressedSoundClipFromCompressedAudio(audioData)}else{sound=jsAudioCreateCompressedSoundClip(audioData,fmodSoundType)}WEBAudio.audioInstances[++WEBAudio.audioInstanceIdCounter]=sound;return WEBAudio.audioInstanceIdCounter}function jsAudioCreateUncompressedSoundClipFromPCM(channels,length,sampleRate,ptr){var buffer=WEBAudio.audioContext.createBuffer(channels,length,sampleRate);for(var i=0;i<channels;i++){var offs=(ptr>>2)+length*i;var copyToChannel=buffer["copyToChannel"]||function(source,channelNumber,startInChannel){var clipped=source.subarray(0,Math.min(source.length,this.length-(startInChannel|0)));this.getChannelData(channelNumber|0).set(clipped,startInChannel|0)};copyToChannel.apply(buffer,[HEAPF32.subarray(offs,offs+length),i,0])}return jsAudioCreateUncompressedSoundClip(buffer,false)}function _JS_Sound_Load_PCM(channels,length,sampleRate,ptr){if(WEBAudio.audioWebEnabled==0)return 0;var sound=jsAudioCreateUncompressedSoundClipFromPCM(channels,length,sampleRate,ptr);WEBAudio.audioInstances[++WEBAudio.audioInstanceIdCounter]=sound;return WEBAudio.audioInstanceIdCounter}function _JS_Sound_Play(bufferInstance,channelInstance,offset,delay){if(WEBAudio.audioWebEnabled==0)return;_JS_Sound_Stop(channelInstance,0);var soundClip=WEBAudio.audioInstances[bufferInstance];var channel=WEBAudio.audioInstances[channelInstance];if(!soundClip){console.log("Trying to play sound which is not loaded.");return}try{channel.playSoundClip(soundClip,WEBAudio.audioContext.currentTime+delay,offset)}catch(error){console.error("playSoundClip error. Exception: "+error)}}function _JS_Sound_ReleaseInstance(instance){var object=WEBAudio.audioInstances[instance];if(object){object.release()}delete WEBAudio.audioInstances[instance]}function _JS_Sound_ResumeIfNeeded(){if(WEBAudio.audioWebEnabled==0)return;if(WEBAudio.audioContext.state==="suspended")WEBAudio.audioContext.resume().catch(function(error){console.warn("Could not resume audio context. Exception: "+error)})}
function _JS_Sound_Set3D(channelInstance,threeD){var channel=WEBAudio.audioInstances[channelInstance];channel.set3D(threeD)}function _JS_Sound_SetListenerOrientation(x,y,z,xUp,yUp,zUp){if(WEBAudio.audioWebEnabled==0)return;x=-x;y=-y;z=-z;var l=WEBAudio.audioContext.listener;if(l.forwardX){if(l.forwardX.value!==x)l.forwardX.value=x;if(l.forwardY.value!==y)l.forwardY.value=y;if(l.forwardZ.value!==z)l.forwardZ.value=z;if(l.upX.value!==xUp)l.upX.value=xUp;if(l.upY.value!==yUp)l.upY.value=yUp;if(l.upZ.value!==zUp)l.upZ.value=zUp}else if(l._forwardX!==x||l._forwardY!==y||l._forwardZ!==z||l._upX!==xUp||l._upY!==yUp||l._upZ!==zUp){l.setOrientation(x,y,z,xUp,yUp,zUp);l._forwardX=x;l._forwardY=y;l._forwardZ=z;l._upX=xUp;l._upY=yUp;l._upZ=zUp}}function _JS_Sound_SetListenerPosition(x,y,z){if(WEBAudio.audioWebEnabled==0)return;var l=WEBAudio.audioContext.listener;if(l.positionX){if(l.positionX.value!==x)l.positionX.value=x;if(l.positionY.value!==y)l.positionY.value=y;if(l.positionZ.value!==z)l.positionZ.value=z}else if(l._positionX!==x||l._positionY!==y||l._positionZ!==z){l.setPosition(x,y,z);l._positionX=x;l._positionY=y;l._positionZ=z}}function _JS_Sound_SetLoop(channelInstance,loop){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];channel.setLoop(loop)}function _JS_Sound_SetLoopPoints(channelInstance,loopStart,loopEnd){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];channel.setLoopPoints(loopStart,loopEnd)}function _JS_Sound_SetPaused(channelInstance,paused){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];if(paused!=channel.isPaused()){if(paused)channel.pause();else channel.resume()}}function _JS_Sound_SetPitch(channelInstance,v){if(WEBAudio.audioWebEnabled==0)return;try{var channel=WEBAudio.audioInstances[channelInstance];channel.setPitch(v)}catch(e){console.error("JS_Sound_SetPitch(channel="+channelInstance+", pitch="+v+") threw an exception: "+e)}}
function _JS_Sound_SetPosition(channelInstance,x,y,z){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];channel.setPosition(x,y,z)}function _JS_Sound_SetVolume(channelInstance,v){if(WEBAudio.audioWebEnabled==0)return;try{var channel=WEBAudio.audioInstances[channelInstance];channel.setVolume(v)}catch(e){console.error("JS_Sound_SetVolume(channel="+channelInstance+", volume="+v+") threw an exception: "+e)}}function _JS_Sound_Stop(channelInstance,delay){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];channel.stop(delay)}function _JS_SystemInfo_GetBrowserName(buffer,bufferSize){var browser=Module.SystemInfo.browser;if(buffer)stringToUTF8(browser,buffer,bufferSize);return lengthBytesUTF8(browser)}function _JS_SystemInfo_GetBrowserVersionString(buffer,bufferSize){var browserVer=Module.SystemInfo.browserVersion;if(buffer)stringToUTF8(browserVer,buffer,bufferSize);return lengthBytesUTF8(browserVer)}function _JS_SystemInfo_GetCanvasClientSize(domElementSelector,outWidth,outHeight){var selector=UTF8ToString(domElementSelector);var canvas=selector=="#canvas"?Module["canvas"]:document.querySelector(selector);var w=0,h=0;if(canvas){var size=canvas.getBoundingClientRect();w=size.width;h=size.height}HEAPF64[outWidth>>3]=w;HEAPF64[outHeight>>3]=h}function _JS_SystemInfo_GetDocumentURL(buffer,bufferSize){if(buffer)stringToUTF8(document.URL,buffer,bufferSize);return lengthBytesUTF8(document.URL)}function _JS_SystemInfo_GetGPUInfo(buffer,bufferSize){var gpuinfo=Module.SystemInfo.gpu;if(buffer)stringToUTF8(gpuinfo,buffer,bufferSize);return lengthBytesUTF8(gpuinfo)}function _JS_SystemInfo_GetLanguage(buffer,bufferSize){var language=Module.SystemInfo.language;if(buffer)stringToUTF8(language,buffer,bufferSize);return lengthBytesUTF8(language)}function _JS_SystemInfo_GetMatchWebGLToCanvasSize(){return Module.matchWebGLToCanvasSize||Module.matchWebGLToCanvasSize===undefined}function 
_JS_SystemInfo_GetMemory(){return HEAPU8.length/(1024*1024)}function _JS_SystemInfo_GetOS(buffer,bufferSize){var browser=Module.SystemInfo.os+" "+Module.SystemInfo.osVersion;if(buffer)stringToUTF8(browser,buffer,bufferSize);return lengthBytesUTF8(browser)}function _JS_SystemInfo_GetPreferredDevicePixelRatio(){return Module.matchWebGLToCanvasSize==false?1:Module.devicePixelRatio||window.devicePixelRatio||1}function _JS_SystemInfo_GetScreenSize(outWidth,outHeight){HEAPF64[outWidth>>3]=Module.SystemInfo.width;HEAPF64[outHeight>>3]=Module.SystemInfo.height}function _JS_SystemInfo_GetStreamingAssetsURL(buffer,bufferSize){if(buffer)stringToUTF8(Module.streamingAssetsUrl,buffer,bufferSize);return lengthBytesUTF8(Module.streamingAssetsUrl)}function _JS_SystemInfo_HasAstcHdr(){var ext=GLctx.getExtension("WEBGL_compressed_texture_astc");if(ext&&ext.getSupportedProfiles){return ext.getSupportedProfiles().includes("hdr")}return false}function _JS_SystemInfo_HasCursorLock(){return Module.SystemInfo.hasCursorLock}function _JS_SystemInfo_HasFullscreen(){return Module.SystemInfo.hasFullscreen}function _JS_SystemInfo_HasWebGL(){return Module.SystemInfo.hasWebGL}function _JS_UnityEngineShouldQuit(){return!!Module.shouldQuit}var wr={requests:{},responses:{},abortControllers:{},timer:{},nextRequestId:1};function _JS_WebRequest_Abort(requestId){var abortController=wr.abortControllers[requestId];if(!abortController||abortController.signal.aborted){return}abortController.abort()}function _JS_WebRequest_Create(url,method){var _url=UTF8ToString(url);var _method=UTF8ToString(method);var abortController=new AbortController;var requestOptions={url:_url,init:{method:_method,signal:abortController.signal,headers:{},enableStreamingDownload:true},tempBuffer:null,tempBufferSize:0};wr.abortControllers[wr.nextRequestId]=abortController;wr.requests[wr.nextRequestId]=requestOptions;return wr.nextRequestId++}function jsWebRequestGetResponseHeaderString(requestId){var 
response=wr.responses[requestId];if(!response){return""}if(response.headerString){return response.headerString}var headers="";var entries=response.headers.entries();for(var result=entries.next();!result.done;result=entries.next()){headers+=result.value[0]+": "+result.value[1]+"\r\n"}response.headerString=headers;return headers}function _JS_WebRequest_GetResponseMetaData(requestId,headerBuffer,headerSize,responseUrlBuffer,responseUrlSize){var response=wr.responses[requestId];if(!response){stringToUTF8("",headerBuffer,headerSize);stringToUTF8("",responseUrlBuffer,responseUrlSize);return}if(headerBuffer){var headers=jsWebRequestGetResponseHeaderString(requestId);stringToUTF8(headers,headerBuffer,headerSize)}if(responseUrlBuffer){stringToUTF8(response.url,responseUrlBuffer,responseUrlSize)}}function _JS_WebRequest_GetResponseMetaDataLengths(requestId,buffer){var response=wr.responses[requestId];if(!response){HEAPU32[buffer>>2]=0;HEAPU32[(buffer>>2)+1]=0;return}var headers=jsWebRequestGetResponseHeaderString(requestId);HEAPU32[buffer>>2]=lengthBytesUTF8(headers);HEAPU32[(buffer>>2)+1]=lengthBytesUTF8(response.url)}function _JS_WebRequest_Release(requestId){if(wr.timer[requestId]){clearTimeout(wr.timer[requestId])}delete wr.requests[requestId];delete wr.responses[requestId];delete wr.abortControllers[requestId];delete wr.timer[requestId]}function _JS_WebRequest_Send(requestId,ptr,length,arg,onresponse,onprogress){var requestOptions=wr.requests[requestId];var abortController=wr.abortControllers[requestId];function getTempBuffer(size){if(!requestOptions.tempBuffer){const initialSize=Math.max(size,1024);requestOptions.tempBuffer=_malloc(initialSize);requestOptions.tempBufferSize=initialSize}if(requestOptions.tempBufferSize0){var postData=HEAPU8.subarray(ptr,ptr+length);requestOptions.init.body=new Blob([postData])}if(requestOptions.timeout){wr.timer[requestId]=setTimeout(function(){requestOptions.isTimedOut=true;abortController.abort()},requestOptions.timeout)}var 
fetchImpl=Module.fetchWithProgress;requestOptions.init.onProgress=HandleProgress;if(Module.companyName&&Module.productName&&Module.cachedFetch){fetchImpl=Module.cachedFetch;requestOptions.init.companyName=Module.companyName;requestOptions.init.productName=Module.productName;requestOptions.init.productVersion=Module.productVersion;requestOptions.init.control=Module.cacheControl(requestOptions.url)}fetchImpl(requestOptions.url,requestOptions.init).then(function(response){wr.responses[requestId]=response;HandleSuccess(response,response.parsedBody)}).catch(function(error){var kWebErrorUnknown=2;var kWebErrorAborted=17;var kWebErrorTimeout=14;if(requestOptions.isTimedOut){HandleError("Connection timed out.",kWebErrorTimeout)}else if(abortController.signal.aborted){HandleError("Aborted.",kWebErrorAborted)}else{HandleError(error.message,kWebErrorUnknown)}})}catch(error){var kWebErrorUnknown=2;HandleError(error.message,kWebErrorUnknown)}}function _JS_WebRequest_SetRedirectLimit(request,redirectLimit){var requestOptions=wr.requests[request];if(!requestOptions){return}requestOptions.init.redirect=redirectLimit===0?"error":"follow"}function _JS_WebRequest_SetRequestHeader(requestId,header,value){var requestOptions=wr.requests[requestId];if(!requestOptions){return}var _header=UTF8ToString(header);var _value=UTF8ToString(value);requestOptions.init.headers[_header]=_value}function _JS_WebRequest_SetTimeout(requestId,timeout){var requestOptions=wr.requests[requestId];if(!requestOptions){return}requestOptions.timeout=timeout}function ___cxa_allocate_exception(size){return _malloc(size+16)+16}function ExceptionInfo(excPtr){this.excPtr=excPtr;this.ptr=excPtr-16;this.set_type=function(type){HEAP32[this.ptr+4>>2]=type};this.get_type=function(){return HEAP32[this.ptr+4>>2]};this.set_destructor=function(destructor){HEAP32[this.ptr+8>>2]=destructor};this.get_destructor=function(){return HEAP32[this.ptr+8>>2]};
this.set_refcount=function(refcount){HEAP32[this.ptr>>2]=refcount};this.set_caught=function(caught){caught=caught?1:0;HEAP8[this.ptr+12>>0]=caught};this.get_caught=function(){return HEAP8[this.ptr+12>>0]!=0};this.set_rethrown=function(rethrown){rethrown=rethrown?1:0;HEAP8[this.ptr+13>>0]=rethrown};this.get_rethrown=function(){return HEAP8[this.ptr+13>>0]!=0};this.init=function(type,destructor){this.set_type(type);this.set_destructor(destructor);this.set_refcount(0);this.set_caught(false);this.set_rethrown(false)};this.add_ref=function(){var value=HEAP32[this.ptr>>2];HEAP32[this.ptr>>2]=value+1};this.release_ref=function(){var prev=HEAP32[this.ptr>>2];HEAP32[this.ptr>>2]=prev-1;return prev===1}}function CatchInfo(ptr){this.free=function(){_free(this.ptr);this.ptr=0};this.set_base_ptr=function(basePtr){HEAP32[this.ptr>>2]=basePtr};this.get_base_ptr=function(){return HEAP32[this.ptr>>2]};this.set_adjusted_ptr=function(adjustedPtr){HEAP32[this.ptr+4>>2]=adjustedPtr};this.get_adjusted_ptr_addr=function(){return this.ptr+4};this.get_adjusted_ptr=function(){return HEAP32[this.ptr+4>>2]};this.get_exception_ptr=function(){var isPointer=___cxa_is_pointer_type(this.get_exception_info().get_type());if(isPointer){return HEAP32[this.get_base_ptr()>>2]}var adjusted=this.get_adjusted_ptr();if(adjusted!==0)return adjusted;return this.get_base_ptr()};this.get_exception_info=function(){return new ExceptionInfo(this.get_base_ptr())};if(ptr===undefined){this.ptr=_malloc(8);this.set_adjusted_ptr(0)}else{this.ptr=ptr}}var exceptionCaught=[];function exception_addRef(info){info.add_ref()}var uncaughtExceptionCount=0;function ___cxa_begin_catch(ptr){var catchInfo=new CatchInfo(ptr);var info=catchInfo.get_exception_info();if(!info.get_caught()){info.set_caught(true);uncaughtExceptionCount--}info.set_rethrown(false);exceptionCaught.push(catchInfo);exception_addRef(info);return catchInfo.get_exception_ptr()}var exceptionLast=0;function ___cxa_free_exception(ptr){return _free(new ExceptionInfo(ptr).ptr)}
function exception_decRef(info){if(info.release_ref()&&!info.get_rethrown()){var destructor=info.get_destructor();if(destructor){(function(a1){return dynCall_ii.apply(null,[destructor,a1])})(info.excPtr)}___cxa_free_exception(info.excPtr)}}function ___cxa_end_catch(){_setThrew(0);var catchInfo=exceptionCaught.pop();exception_decRef(catchInfo.get_exception_info());catchInfo.free();exceptionLast=0}function ___resumeException(catchInfoPtr){var catchInfo=new CatchInfo(catchInfoPtr);var ptr=catchInfo.get_base_ptr();if(!exceptionLast){exceptionLast=ptr}catchInfo.free();throw ptr}function ___cxa_find_matching_catch_2(){var thrown=exceptionLast;if(!thrown){setTempRet0(0);return 0|0}var info=new ExceptionInfo(thrown);var thrownType=info.get_type();var catchInfo=new CatchInfo;catchInfo.set_base_ptr(thrown);catchInfo.set_adjusted_ptr(thrown);if(!thrownType){setTempRet0(0);return catchInfo.ptr|0}var typeArray=Array.prototype.slice.call(arguments);for(var i=0;i=0;i--){var last=parts[i];if(last==="."){parts.splice(i,1)}else if(last===".."){parts.splice(i,1);up++}else if(up){parts.splice(i,1);up--}}if(allowAboveRoot){for(;up;up--){parts.unshift("..")}}return parts},normalize:function(path){var isAbsolute=path.charAt(0)==="/",trailingSlash=path.substr(-1)==="/";path=PATH.normalizeArray(path.split("/").filter(function(p){return!!p}),!isAbsolute).join("/");if(!path&&!isAbsolute){path="."}if(path&&trailingSlash){path+="/"}return(isAbsolute?"/":"")+path},dirname:function(path){var result=PATH.splitPath(path),root=result[0],dir=result[1];if(!root&&!dir){return"."}if(dir){dir=dir.substr(0,dir.length-1)}return root+dir},basename:function(path){if(path==="/")return"/";path=PATH.normalize(path);path=path.replace(/\/$/,"");var lastSlash=path.lastIndexOf("/");if(lastSlash===-1)return path;return path.substr(lastSlash+1)},extname:function(path){return PATH.splitPath(path)[3]},join:function(){var paths=Array.prototype.slice.call(arguments,0);return PATH.normalize(paths.join("/"))},join2:function(l,r){return PATH.normalize(l+"/"+r)}};
function getRandomDevice(){if(typeof crypto=="object"&&typeof crypto["getRandomValues"]=="function"){var randomBuffer=new Uint8Array(1);return function(){crypto.getRandomValues(randomBuffer);return randomBuffer[0]}}else if(ENVIRONMENT_IS_NODE){try{var crypto_module=require("crypto");return function(){return crypto_module["randomBytes"](1)[0]}}catch(e){}}return function(){abort("randomDevice")}}var PATH_FS={resolve:function(){var resolvedPath="",resolvedAbsolute=false;for(var i=arguments.length-1;i>=-1&&!resolvedAbsolute;i--){var path=i>=0?arguments[i]:FS.cwd();if(typeof path!="string"){throw new TypeError("Arguments to path.resolve must be strings")}else if(!path){return""}resolvedPath=path+"/"+resolvedPath;resolvedAbsolute=path.charAt(0)==="/"}resolvedPath=PATH.normalizeArray(resolvedPath.split("/").filter(function(p){return!!p}),!resolvedAbsolute).join("/");return(resolvedAbsolute?"/":"")+resolvedPath||"."},relative:function(from,to){from=PATH_FS.resolve(from).substr(1);to=PATH_FS.resolve(to).substr(1);function trim(arr){var start=0;for(;start<arr.length;start++){if(arr[start]!=="")break}var end=arr.length-1;for(;end>=0;end--){if(arr[end]!=="")break}if(start>end)return[];return arr.slice(start,end-start+1)}var fromParts=trim(from.split("/"));var toParts=trim(to.split("/"));var length=Math.min(fromParts.length,toParts.length);var samePartsLength=length;for(var i=0;i0){result=buf.slice(0,bytesRead).toString("utf-8")}else{result=null}}else if(typeof window!="undefined"&&typeof window.prompt=="function"){result=window.prompt("Input: ");if(result!==null){result+="\n"}}else if(typeof readline=="function"){result=readline();if(result!==null){result+="\n"}}if(!result){return null}tty.input=intArrayFromString(result,true)}return tty.input.shift()},
put_char:function(tty,val){if(val===null||val===10){out(UTF8ArrayToString(tty.output,0));tty.output=[]}else{if(val!=0)tty.output.push(val)}},flush:function(tty){if(tty.output&&tty.output.length>0){out(UTF8ArrayToString(tty.output,0));tty.output=[]}}},default_tty1_ops:{put_char:function(tty,val){if(val===null||val===10){err(UTF8ArrayToString(tty.output,0));tty.output=[]}else{if(val!=0)tty.output.push(val)}},flush:function(tty){if(tty.output&&tty.output.length>0){err(UTF8ArrayToString(tty.output,0));tty.output=[]}}}};function zeroMemory(address,size){HEAPU8.fill(0,address,address+size)}function alignMemory(size,alignment){return Math.ceil(size/alignment)*alignment}function mmapAlloc(size){size=alignMemory(size,65536);var ptr=_emscripten_builtin_memalign(65536,size);if(!ptr)return 0;zeroMemory(ptr,size);return ptr}var MEMFS={ops_table:null,mount:function(mount){return MEMFS.createNode(null,"/",16384|511,0)},createNode:function(parent,name,mode,dev){if(FS.isBlkdev(mode)||FS.isFIFO(mode)){throw new FS.ErrnoError(63)}if(!MEMFS.ops_table){MEMFS.ops_table={dir:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr,lookup:MEMFS.node_ops.lookup,mknod:MEMFS.node_ops.mknod,rename:MEMFS.node_ops.rename,unlink:MEMFS.node_ops.unlink,rmdir:MEMFS.node_ops.rmdir,readdir:MEMFS.node_ops.readdir,symlink:MEMFS.node_ops.symlink},stream:{llseek:MEMFS.stream_ops.llseek}},file:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr},stream:{llseek:MEMFS.stream_ops.llseek,read:MEMFS.stream_ops.read,write:MEMFS.stream_ops.write,allocate:MEMFS.stream_ops.allocate,mmap:MEMFS.stream_ops.mmap,msync:MEMFS.stream_ops.msync}},link:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr,readlink:MEMFS.node_ops.readlink},stream:{}},chrdev:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr},stream:FS.chrdev_stream_ops}}}var 
node=FS.createNode(parent,name,mode,dev);if(FS.isDir(node.mode)){node.node_ops=MEMFS.ops_table.dir.node;node.stream_ops=MEMFS.ops_table.dir.stream;node.contents={}}else if(FS.isFile(node.mode)){node.node_ops=MEMFS.ops_table.file.node;node.stream_ops=MEMFS.ops_table.file.stream;node.usedBytes=0;node.contents=null}else if(FS.isLink(node.mode)){node.node_ops=MEMFS.ops_table.link.node;node.stream_ops=MEMFS.ops_table.link.stream}else if(FS.isChrdev(node.mode)){node.node_ops=MEMFS.ops_table.chrdev.node;node.stream_ops=MEMFS.ops_table.chrdev.stream}node.timestamp=Date.now();if(parent){parent.contents[name]=node;parent.timestamp=node.timestamp}return node},getFileDataAsTypedArray:function(node){if(!node.contents)return new Uint8Array(0);if(node.contents.subarray)return node.contents.subarray(0,node.usedBytes);return new Uint8Array(node.contents)},expandFileStorage:function(node,newCapacity){var prevCapacity=node.contents?node.contents.length:0;if(prevCapacity>=newCapacity)return;var CAPACITY_DOUBLING_MAX=1024*1024;newCapacity=Math.max(newCapacity,prevCapacity*(prevCapacity<CAPACITY_DOUBLING_MAX?2:1.125)>>>0);if(prevCapacity!=0)newCapacity=Math.max(newCapacity,256);var oldContents=node.contents;node.contents=new Uint8Array(newCapacity);if(node.usedBytes>0)node.contents.set(oldContents.subarray(0,node.usedBytes),0)},resizeFileStorage:function(node,newSize){if(node.usedBytes==newSize)return;if(newSize==0){node.contents=null;node.usedBytes=0}else{var oldContents=node.contents;node.contents=new Uint8Array(newSize);if(oldContents){node.contents.set(oldContents.subarray(0,Math.min(newSize,node.usedBytes)))}node.usedBytes=newSize}},node_ops:{getattr:function(node){var attr={};attr.dev=FS.isChrdev(node.mode)?node.id:1;attr.ino=node.id;attr.mode=node.mode;attr.nlink=1;attr.uid=0;attr.gid=0;attr.rdev=node.rdev;if(FS.isDir(node.mode)){attr.size=4096}else if(FS.isFile(node.mode)){attr.size=node.usedBytes}else if(FS.isLink(node.mode)){attr.size=node.link.length}else{attr.size=0}attr.atime=new 
Date(node.timestamp);attr.mtime=new Date(node.timestamp);attr.ctime=new Date(node.timestamp);attr.blksize=4096;attr.blocks=Math.ceil(attr.size/attr.blksize);return attr},setattr:function(node,attr){if(attr.mode!==undefined){node.mode=attr.mode}if(attr.timestamp!==undefined){node.timestamp=attr.timestamp}if(attr.size!==undefined){MEMFS.resizeFileStorage(node,attr.size)}},lookup:function(parent,name){throw FS.genericErrors[44]},mknod:function(parent,name,mode,dev){return MEMFS.createNode(parent,name,mode,dev)},rename:function(old_node,new_dir,new_name){if(FS.isDir(old_node.mode)){var new_node;try{new_node=FS.lookupNode(new_dir,new_name)}catch(e){}if(new_node){for(var i in new_node.contents){throw new FS.ErrnoError(55)}}}delete old_node.parent.contents[old_node.name];old_node.parent.timestamp=Date.now();old_node.name=new_name;new_dir.contents[new_name]=old_node;new_dir.timestamp=old_node.parent.timestamp;old_node.parent=new_dir},unlink:function(parent,name){delete parent.contents[name];parent.timestamp=Date.now()},rmdir:function(parent,name){var node=FS.lookupNode(parent,name);for(var i in node.contents){throw new FS.ErrnoError(55)}delete parent.contents[name];parent.timestamp=Date.now()},readdir:function(node){var entries=[".",".."];for(var key in node.contents){if(!node.contents.hasOwnProperty(key)){continue}entries.push(key)}return entries},symlink:function(parent,newname,oldpath){var node=MEMFS.createNode(parent,newname,511|40960,0);node.link=oldpath;return node},readlink:function(node){if(!FS.isLink(node.mode)){throw new FS.ErrnoError(28)}return node.link}},stream_ops:{read:function(stream,buffer,offset,length,position){var contents=stream.node.contents;if(position>=stream.node.usedBytes)return 0;var size=Math.min(stream.node.usedBytes-position,length);if(size>8&&contents.subarray){buffer.set(contents.subarray(position,position+size),offset)}else{for(var i=0;i0||position+length{if(typeof indexedDB!="undefined")return indexedDB;var ret=null;if(typeof 
window=="object")ret=window.indexedDB||window.mozIndexedDB||window.webkitIndexedDB||window.msIndexedDB;assert(ret,"IDBFS used, but indexedDB not supported");return ret},DB_VERSION:21,DB_STORE_NAME:"FILE_DATA",mount:function(mount){return MEMFS.mount.apply(null,arguments)},syncfs:(mount,populate,callback)=>{IDBFS.getLocalSet(mount,(err,local)=>{if(err)return callback(err);IDBFS.getRemoteSet(mount,(err,remote)=>{if(err)return callback(err);var src=populate?remote:local;var dst=populate?local:remote;IDBFS.reconcile(src,dst,callback)})})},getDB:(name,callback)=>{var db=IDBFS.dbs[name];if(db){return callback(null,db)}var req;try{req=IDBFS.indexedDB().open(name,IDBFS.DB_VERSION)}catch(e){return callback(e)}if(!req){return callback("Unable to connect to IndexedDB")}req.onupgradeneeded=(e=>{var db=e.target.result;var transaction=e.target.transaction;var fileStore;if(db.objectStoreNames.contains(IDBFS.DB_STORE_NAME)){fileStore=transaction.objectStore(IDBFS.DB_STORE_NAME)}else{fileStore=db.createObjectStore(IDBFS.DB_STORE_NAME)}if(!fileStore.indexNames.contains("timestamp")){fileStore.createIndex("timestamp","timestamp",{unique:false})}});req.onsuccess=(()=>{db=req.result;IDBFS.dbs[name]=db;callback(null,db)});req.onerror=(e=>{callback(this.error);e.preventDefault()})},getLocalSet:(mount,callback)=>{var entries={};function isRealDir(p){return p!=="."&&p!==".."}function toAbsolute(root){return p=>{return PATH.join2(root,p)}}var check=FS.readdir(mount.mountpoint).filter(isRealDir).map(toAbsolute(mount.mountpoint));while(check.length){var path=check.pop();var stat;try{stat=FS.stat(path)}catch(e){return callback(e)}if(FS.isDir(stat.mode)){check.push.apply(check,FS.readdir(path).filter(isRealDir).map(toAbsolute(path)))}entries[path]={"timestamp":stat.mtime}}return callback(null,{type:"local",entries:entries})},getRemoteSet:(mount,callback)=>{var entries={};IDBFS.getDB(mount.mountpoint,(err,db)=>{if(err)return callback(err);try{var 
transaction=db.transaction([IDBFS.DB_STORE_NAME],"readonly");transaction.onerror=(e=>{callback(this.error);e.preventDefault()});var store=transaction.objectStore(IDBFS.DB_STORE_NAME);var index=store.index("timestamp");index.openKeyCursor().onsuccess=(event=>{var cursor=event.target.result;if(!cursor){return callback(null,{type:"remote",db:db,entries:entries})}entries[cursor.primaryKey]={"timestamp":cursor.key};cursor.continue()})}catch(e){return callback(e)}})},loadLocalEntry:(path,callback)=>{var stat,node;try{var lookup=FS.lookupPath(path);node=lookup.node;stat=FS.stat(path)}catch(e){return callback(e)}if(FS.isDir(stat.mode)){return callback(null,{"timestamp":stat.mtime,"mode":stat.mode})}else if(FS.isFile(stat.mode)){node.contents=MEMFS.getFileDataAsTypedArray(node);return callback(null,{"timestamp":stat.mtime,"mode":stat.mode,"contents":node.contents})}else{return callback(new Error("node type not supported"))}},storeLocalEntry:(path,entry,callback)=>{try{if(FS.isDir(entry["mode"])){FS.mkdirTree(path,entry["mode"])}else if(FS.isFile(entry["mode"])){FS.writeFile(path,entry["contents"],{canOwn:true})}else{return callback(new Error("node type not supported"))}FS.chmod(path,entry["mode"]);FS.utime(path,entry["timestamp"],entry["timestamp"])}catch(e){return callback(e)}callback(null)},removeLocalEntry:(path,callback)=>{try{var lookup=FS.lookupPath(path);var stat=FS.stat(path);if(FS.isDir(stat.mode)){FS.rmdir(path)}else if(FS.isFile(stat.mode)){FS.unlink(path)}}catch(e){return callback(e)}callback(null)},loadRemoteEntry:(store,path,callback)=>{var req=store.get(path);req.onsuccess=(event=>{callback(null,event.target.result)});req.onerror=(e=>{callback(this.error);e.preventDefault()})},storeRemoteEntry:(store,path,entry,callback)=>{try{var req=store.put(entry,path)}catch(e){callback(e);return}req.onsuccess=(()=>{callback(null)});req.onerror=(e=>{callback(this.error);e.preventDefault()})},removeRemoteEntry:(store,path,callback)=>{var 
req=store.delete(path);req.onsuccess=(()=>{callback(null)});req.onerror=(e=>{callback(this.error);e.preventDefault()})},reconcile:(src,dst,callback)=>{var total=0;var create=[];Object.keys(src.entries).forEach(function(key){var e=src.entries[key];var e2=dst.entries[key];if(!e2||e["timestamp"].getTime()!=e2["timestamp"].getTime()){create.push(key);total++}});var remove=[];Object.keys(dst.entries).forEach(function(key){if(!src.entries[key]){remove.push(key);total++}});if(!total){return callback(null)}var errored=false;var db=src.type==="remote"?src.db:dst.db;var transaction=db.transaction([IDBFS.DB_STORE_NAME],"readwrite");var store=transaction.objectStore(IDBFS.DB_STORE_NAME);function done(err){if(err&&!errored){errored=true;return callback(err)}}transaction.onerror=(e=>{done(this.error);e.preventDefault()});transaction.oncomplete=(e=>{if(!errored){callback(null)}});create.sort().forEach(path=>{if(dst.type==="local"){IDBFS.loadRemoteEntry(store,path,(err,entry)=>{if(err)return done(err);IDBFS.storeLocalEntry(path,entry,done)})}else{IDBFS.loadLocalEntry(path,(err,entry)=>{if(err)return done(err);IDBFS.storeRemoteEntry(store,path,entry,done)})}});remove.sort().reverse().forEach(path=>{if(dst.type==="local"){IDBFS.removeLocalEntry(path,done)}else{IDBFS.removeRemoteEntry(store,path,done)}})}};var FS={root:null,mounts:[],devices:{},streams:[],nextInode:1,nameTable:null,currentPath:"/",initialized:false,ignorePermissions:true,ErrnoError:null,genericErrors:{},filesystems:null,syncFSRequests:0,lookupPath:(path,opts={})=>{path=PATH_FS.resolve(FS.cwd(),path);if(!path)return{path:"",node:null};var defaults={follow_mount:true,recurse_count:0};opts=Object.assign(defaults,opts);if(opts.recurse_count>8){throw new FS.ErrnoError(32)}var parts=PATH.normalizeArray(path.split("/").filter(p=>!!p),false);var current=FS.root;var current_path="/";for(var i=0;i<parts.length;i++){var islast=i===parts.length-1;if(islast&&opts.parent){break}current=FS.lookupNode(current,parts[i]);current_path=PATH.join2(current_path,parts[i]);if(FS.isMountpoint(current)){if(!islast||islast&&opts.follow_mount){current=current.mounted.root}}if(!islast||opts.follow){var count=0;while(FS.isLink(current.mode)){var link=FS.readlink(current_path);current_path=PATH_FS.resolve(PATH.dirname(current_path),link);var lookup=FS.lookupPath(current_path,{recurse_count:opts.recurse_count+1});current=lookup.node;if(count++>40){throw new FS.ErrnoError(32)}}}}return{path:current_path,node:current}},getPath:node=>{var 
path;while(true){if(FS.isRoot(node)){var mount=node.mount.mountpoint;if(!path)return mount;return mount[mount.length-1]!=="/"?mount+"/"+path:mount+path}path=path?node.name+"/"+path:node.name;node=node.parent}},hashName:(parentid,name)=>{var hash=0;for(var i=0;i<name.length;i++){hash=(hash<<5)-hash+name.charCodeAt(i)|0}return(parentid+hash>>>0)%FS.nameTable.length},hashAddNode:node=>{var hash=FS.hashName(node.parent.id,node.name);node.name_next=FS.nameTable[hash];FS.nameTable[hash]=node},hashRemoveNode:node=>{var hash=FS.hashName(node.parent.id,node.name);if(FS.nameTable[hash]===node){FS.nameTable[hash]=node.name_next}else{var current=FS.nameTable[hash];while(current){if(current.name_next===node){current.name_next=node.name_next;break}current=current.name_next}}},lookupNode:(parent,name)=>{var errCode=FS.mayLookup(parent);if(errCode){throw new FS.ErrnoError(errCode,parent)}var hash=FS.hashName(parent.id,name);for(var node=FS.nameTable[hash];node;node=node.name_next){var nodeName=node.name;if(node.parent.id===parent.id&&nodeName===name){return node}}return FS.lookup(parent,name)},createNode:(parent,name,mode,rdev)=>{var node=new FS.FSNode(parent,name,mode,rdev);FS.hashAddNode(node);return node},destroyNode:node=>{FS.hashRemoveNode(node)},isRoot:node=>{return node===node.parent},isMountpoint:node=>{return!!node.mounted},isFile:mode=>{return(mode&61440)===32768},isDir:mode=>{return(mode&61440)===16384},isLink:mode=>{return(mode&61440)===40960},isChrdev:mode=>{return(mode&61440)===8192},isBlkdev:mode=>{return(mode&61440)===24576},isFIFO:mode=>{return(mode&61440)===4096},isSocket:mode=>{return(mode&49152)===49152},flagModes:{"r":0,"r+":2,"w":577,"w+":578,"a":1089,"a+":1090},modeStringToFlags:str=>{var flags=FS.flagModes[str];if(typeof flags=="undefined"){throw new Error("Unknown file open mode: "+str)}return flags},flagsToPermissionString:flag=>{var perms=["r","w","rw"][flag&3];if(flag&512){perms+="w"}return perms},nodePermissions:(node,perms)=>{if(FS.ignorePermissions){return 0}if(perms.includes("r")&&!(node.mode&292)){return 2}else 
if(perms.includes("w")&&!(node.mode&146)){return 2}else if(perms.includes("x")&&!(node.mode&73)){return 2}return 0},mayLookup:dir=>{var errCode=FS.nodePermissions(dir,"x");if(errCode)return errCode;if(!dir.node_ops.lookup)return 2;return 0},mayCreate:(dir,name)=>{try{var node=FS.lookupNode(dir,name);return 20}catch(e){}return FS.nodePermissions(dir,"wx")},mayDelete:(dir,name,isdir)=>{var node;try{node=FS.lookupNode(dir,name)}catch(e){return e.errno}var errCode=FS.nodePermissions(dir,"wx");if(errCode){return errCode}if(isdir){if(!FS.isDir(node.mode)){return 54}if(FS.isRoot(node)||FS.getPath(node)===FS.cwd()){return 10}}else{if(FS.isDir(node.mode)){return 31}}return 0},mayOpen:(node,flags)=>{if(!node){return 44}if(FS.isLink(node.mode)){return 32}else if(FS.isDir(node.mode)){if(FS.flagsToPermissionString(flags)!=="r"||flags&512){return 31}}return FS.nodePermissions(node,FS.flagsToPermissionString(flags))},MAX_OPEN_FDS:4096,nextfd:(fd_start=0,fd_end=FS.MAX_OPEN_FDS)=>{for(var fd=fd_start;fd<=fd_end;fd++){if(!FS.streams[fd]){return fd}}throw new FS.ErrnoError(33)},getStream:fd=>FS.streams[fd],createStream:(stream,fd_start,fd_end)=>{if(!FS.FSStream){FS.FSStream=function(){};FS.FSStream.prototype={object:{get:function(){return this.node},set:function(val){this.node=val}},isRead:{get:function(){return(this.flags&2097155)!==1}},isWrite:{get:function(){return(this.flags&2097155)!==0}},isAppend:{get:function(){return this.flags&1024}}}}stream=Object.assign(new FS.FSStream,stream);var fd=FS.nextfd(fd_start,fd_end);stream.fd=fd;FS.streams[fd]=stream;return stream},closeStream:fd=>{FS.streams[fd]=null},chrdev_stream_ops:{open:stream=>{var device=FS.getDevice(stream.node.rdev);stream.stream_ops=device.stream_ops;if(stream.stream_ops.open){stream.stream_ops.open(stream)}},llseek:()=>{throw new 
FS.ErrnoError(70)}},major:dev=>dev>>8,minor:dev=>dev&255,makedev:(ma,mi)=>ma<<8|mi,registerDevice:(dev,ops)=>{FS.devices[dev]={stream_ops:ops}},getDevice:dev=>FS.devices[dev],getMounts:mount=>{var mounts=[];var check=[mount];while(check.length){var m=check.pop();mounts.push(m);check.push.apply(check,m.mounts)}return mounts},syncfs:(populate,callback)=>{if(typeof populate=="function"){callback=populate;populate=false}FS.syncFSRequests++;if(FS.syncFSRequests>1){err("warning: "+FS.syncFSRequests+" FS.syncfs operations in flight at once, probably just doing extra work")}var mounts=FS.getMounts(FS.root.mount);var completed=0;function doCallback(errCode){FS.syncFSRequests--;return callback(errCode)}function done(errCode){if(errCode){if(!done.errored){done.errored=true;return doCallback(errCode)}return}if(++completed>=mounts.length){doCallback(null)}}mounts.forEach(mount=>{if(!mount.type.syncfs){return done(null)}mount.type.syncfs(mount,populate,done)})},mount:(type,opts,mountpoint)=>{var root=mountpoint==="/";var pseudo=!mountpoint;var node;if(root&&FS.root){throw new FS.ErrnoError(10)}else if(!root&&!pseudo){var lookup=FS.lookupPath(mountpoint,{follow_mount:false});mountpoint=lookup.path;node=lookup.node;if(FS.isMountpoint(node)){throw new FS.ErrnoError(10)}if(!FS.isDir(node.mode)){throw new FS.ErrnoError(54)}}var mount={type:type,opts:opts,mountpoint:mountpoint,mounts:[]};var mountRoot=type.mount(mount);mountRoot.mount=mount;mount.root=mountRoot;if(root){FS.root=mountRoot}else if(node){node.mounted=mount;if(node.mount){node.mount.mounts.push(mount)}}return mountRoot},unmount:mountpoint=>{var lookup=FS.lookupPath(mountpoint,{follow_mount:false});if(!FS.isMountpoint(lookup.node)){throw new FS.ErrnoError(28)}var node=lookup.node;var mount=node.mounted;var mounts=FS.getMounts(mount);Object.keys(FS.nameTable).forEach(hash=>{var current=FS.nameTable[hash];while(current){var 
next=current.name_next;if(mounts.includes(current.mount)){FS.destroyNode(current)}current=next}});node.mounted=null;var idx=node.mount.mounts.indexOf(mount);node.mount.mounts.splice(idx,1)},lookup:(parent,name)=>{return parent.node_ops.lookup(parent,name)},mknod:(path,mode,dev)=>{var lookup=FS.lookupPath(path,{parent:true});var parent=lookup.node;var name=PATH.basename(path);if(!name||name==="."||name===".."){throw new FS.ErrnoError(28)}var errCode=FS.mayCreate(parent,name);if(errCode){throw new FS.ErrnoError(errCode)}if(!parent.node_ops.mknod){throw new FS.ErrnoError(63)}return parent.node_ops.mknod(parent,name,mode,dev)},create:(path,mode)=>{mode=mode!==undefined?mode:438;mode&=4095;mode|=32768;return FS.mknod(path,mode,0)},mkdir:(path,mode)=>{mode=mode!==undefined?mode:511;mode&=511|512;mode|=16384;return FS.mknod(path,mode,0)},mkdirTree:(path,mode)=>{var dirs=path.split("/");var d="";for(var i=0;i<dirs.length;++i){if(!dirs[i])continue;d+="/"+dirs[i];try{FS.mkdir(d,mode)}catch(e){if(e.errno!=20)throw e}}},mkdev:(path,mode,dev)=>{if(typeof dev=="undefined"){dev=mode;mode=438}mode|=8192;return FS.mknod(path,mode,dev)},symlink:(oldpath,newpath)=>{if(!PATH_FS.resolve(oldpath)){throw new FS.ErrnoError(44)}var lookup=FS.lookupPath(newpath,{parent:true});var parent=lookup.node;if(!parent){throw new FS.ErrnoError(44)}var newname=PATH.basename(newpath);var errCode=FS.mayCreate(parent,newname);if(errCode){throw new FS.ErrnoError(errCode)}if(!parent.node_ops.symlink){throw new FS.ErrnoError(63)}return parent.node_ops.symlink(parent,newname,oldpath)},rename:(old_path,new_path)=>{var old_dirname=PATH.dirname(old_path);var new_dirname=PATH.dirname(new_path);var old_name=PATH.basename(old_path);var new_name=PATH.basename(new_path);var lookup,old_dir,new_dir;lookup=FS.lookupPath(old_path,{parent:true});old_dir=lookup.node;lookup=FS.lookupPath(new_path,{parent:true});new_dir=lookup.node;if(!old_dir||!new_dir)throw new FS.ErrnoError(44);if(old_dir.mount!==new_dir.mount){throw new FS.ErrnoError(75)}var old_node=FS.lookupNode(old_dir,old_name);var 
relative=PATH_FS.relative(old_path,new_dirname);if(relative.charAt(0)!=="."){throw new FS.ErrnoError(28)}relative=PATH_FS.relative(new_path,old_dirname);if(relative.charAt(0)!=="."){throw new FS.ErrnoError(55)}var new_node;try{new_node=FS.lookupNode(new_dir,new_name)}catch(e){}if(old_node===new_node){return}var isdir=FS.isDir(old_node.mode);var errCode=FS.mayDelete(old_dir,old_name,isdir);if(errCode){throw new FS.ErrnoError(errCode)}errCode=new_node?FS.mayDelete(new_dir,new_name,isdir):FS.mayCreate(new_dir,new_name);if(errCode){throw new FS.ErrnoError(errCode)}if(!old_dir.node_ops.rename){throw new FS.ErrnoError(63)}if(FS.isMountpoint(old_node)||new_node&&FS.isMountpoint(new_node)){throw new FS.ErrnoError(10)}if(new_dir!==old_dir){errCode=FS.nodePermissions(old_dir,"w");if(errCode){throw new FS.ErrnoError(errCode)}}FS.hashRemoveNode(old_node);try{old_dir.node_ops.rename(old_node,new_dir,new_name)}catch(e){throw e}finally{FS.hashAddNode(old_node)}},rmdir:path=>{var lookup=FS.lookupPath(path,{parent:true});var parent=lookup.node;var name=PATH.basename(path);var node=FS.lookupNode(parent,name);var errCode=FS.mayDelete(parent,name,true);if(errCode){throw new FS.ErrnoError(errCode)}if(!parent.node_ops.rmdir){throw new FS.ErrnoError(63)}if(FS.isMountpoint(node)){throw new FS.ErrnoError(10)}parent.node_ops.rmdir(parent,name);FS.destroyNode(node)},readdir:path=>{var lookup=FS.lookupPath(path,{follow:true});var node=lookup.node;if(!node.node_ops.readdir){throw new FS.ErrnoError(54)}return node.node_ops.readdir(node)},unlink:path=>{var lookup=FS.lookupPath(path,{parent:true});var parent=lookup.node;if(!parent){throw new FS.ErrnoError(44)}var name=PATH.basename(path);var node=FS.lookupNode(parent,name);var errCode=FS.mayDelete(parent,name,false);if(errCode){throw new FS.ErrnoError(errCode)}if(!parent.node_ops.unlink){throw new FS.ErrnoError(63)}if(FS.isMountpoint(node)){throw new FS.ErrnoError(10)}parent.node_ops.unlink(parent,name);FS.destroyNode(node)},readlink:path=>{var 
lookup=FS.lookupPath(path);var link=lookup.node;if(!link){throw new FS.ErrnoError(44)}if(!link.node_ops.readlink){throw new FS.ErrnoError(28)}return PATH_FS.resolve(FS.getPath(link.parent),link.node_ops.readlink(link))},stat:(path,dontFollow)=>{var lookup=FS.lookupPath(path,{follow:!dontFollow});var node=lookup.node;if(!node){throw new FS.ErrnoError(44)}if(!node.node_ops.getattr){throw new FS.ErrnoError(63)}return node.node_ops.getattr(node)},lstat:path=>{return FS.stat(path,true)},chmod:(path,mode,dontFollow)=>{var node;if(typeof path=="string"){var lookup=FS.lookupPath(path,{follow:!dontFollow});node=lookup.node}else{node=path}if(!node.node_ops.setattr){throw new FS.ErrnoError(63)}node.node_ops.setattr(node,{mode:mode&4095|node.mode&~4095,timestamp:Date.now()})},lchmod:(path,mode)=>{FS.chmod(path,mode,true)},fchmod:(fd,mode)=>{var stream=FS.getStream(fd);if(!stream){throw new FS.ErrnoError(8)}FS.chmod(stream.node,mode)},chown:(path,uid,gid,dontFollow)=>{var node;if(typeof path=="string"){var lookup=FS.lookupPath(path,{follow:!dontFollow});node=lookup.node}else{node=path}if(!node.node_ops.setattr){throw new FS.ErrnoError(63)}node.node_ops.setattr(node,{timestamp:Date.now()})},lchown:(path,uid,gid)=>{FS.chown(path,uid,gid,true)},fchown:(fd,uid,gid)=>{var stream=FS.getStream(fd);if(!stream){throw new FS.ErrnoError(8)}FS.chown(stream.node,uid,gid)},truncate:(path,len)=>{if(len<0){throw new FS.ErrnoError(28)}var node;if(typeof path=="string"){var lookup=FS.lookupPath(path,{follow:true});node=lookup.node}else{node=path}if(!node.node_ops.setattr){throw new FS.ErrnoError(63)}if(FS.isDir(node.mode)){throw new FS.ErrnoError(31)}if(!FS.isFile(node.mode)){throw new FS.ErrnoError(28)}var errCode=FS.nodePermissions(node,"w");if(errCode){throw new FS.ErrnoError(errCode)}node.node_ops.setattr(node,{size:len,timestamp:Date.now()})},ftruncate:(fd,len)=>{var stream=FS.getStream(fd);if(!stream){throw new FS.ErrnoError(8)}if((stream.flags&2097155)===0){throw new 
FS.ErrnoError(28)}FS.truncate(stream.node,len)},utime:(path,atime,mtime)=>{var lookup=FS.lookupPath(path,{follow:true});var node=lookup.node;node.node_ops.setattr(node,{timestamp:Math.max(atime,mtime)})},open:(path,flags,mode,fd_start,fd_end)=>{if(path===""){throw new FS.ErrnoError(44)}flags=typeof flags=="string"?FS.modeStringToFlags(flags):flags;mode=typeof mode=="undefined"?438:mode;if(flags&64){mode=mode&4095|32768}else{mode=0}var node;if(typeof path=="object"){node=path}else{path=PATH.normalize(path);try{var lookup=FS.lookupPath(path,{follow:!(flags&131072)});node=lookup.node}catch(e){}}var created=false;if(flags&64){if(node){if(flags&128){throw new FS.ErrnoError(20)}}else{node=FS.mknod(path,mode,0);created=true}}if(!node){throw new FS.ErrnoError(44)}if(FS.isChrdev(node.mode)){flags&=~512}if(flags&65536&&!FS.isDir(node.mode)){throw new FS.ErrnoError(54)}if(!created){var errCode=FS.mayOpen(node,flags);if(errCode){throw new FS.ErrnoError(errCode)}}if(flags&512){FS.truncate(node,0)}flags&=~(128|512|131072);var stream=FS.createStream({node:node,path:FS.getPath(node),flags:flags,seekable:true,position:0,stream_ops:node.stream_ops,ungotten:[],error:false},fd_start,fd_end);if(stream.stream_ops.open){stream.stream_ops.open(stream)}if(Module["logReadFiles"]&&!(flags&1)){if(!FS.readFiles)FS.readFiles={};if(!(path in FS.readFiles)){FS.readFiles[path]=1}}return stream},close:stream=>{if(FS.isClosed(stream)){throw new FS.ErrnoError(8)}if(stream.getdents)stream.getdents=null;try{if(stream.stream_ops.close){stream.stream_ops.close(stream)}}catch(e){throw e}finally{FS.closeStream(stream.fd)}stream.fd=null},isClosed:stream=>{return stream.fd===null},llseek:(stream,offset,whence)=>{if(FS.isClosed(stream)){throw new FS.ErrnoError(8)}if(!stream.seekable||!stream.stream_ops.llseek){throw new FS.ErrnoError(70)}if(whence!=0&&whence!=1&&whence!=2){throw new FS.ErrnoError(28)}stream.position=stream.stream_ops.llseek(stream,offset,whence);stream.ungotten=[];return 
stream.position},read:(stream,buffer,offset,length,position)=>{if(length<0||position<0){throw new FS.ErrnoError(28)}if(FS.isClosed(stream)){throw new FS.ErrnoError(8)}if((stream.flags&2097155)===1){throw new FS.ErrnoError(8)}if(FS.isDir(stream.node.mode)){throw new FS.ErrnoError(31)}if(!stream.stream_ops.read){throw new FS.ErrnoError(28)}var seeking=typeof position!="undefined";if(!seeking){position=stream.position}else if(!stream.seekable){throw new FS.ErrnoError(70)}var bytesRead=stream.stream_ops.read(stream,buffer,offset,length,position);if(!seeking)stream.position+=bytesRead;return bytesRead},write:(stream,buffer,offset,length,position,canOwn)=>{if(length<0||position<0){throw new FS.ErrnoError(28)}if(FS.isClosed(stream)){throw new FS.ErrnoError(8)}if((stream.flags&2097155)===0){throw new FS.ErrnoError(8)}if(FS.isDir(stream.node.mode)){throw new FS.ErrnoError(31)}if(!stream.stream_ops.write){throw new FS.ErrnoError(28)}if(stream.seekable&&stream.flags&1024){FS.llseek(stream,0,2)}var seeking=typeof position!="undefined";if(!seeking){position=stream.position}else if(!stream.seekable){throw new FS.ErrnoError(70)}var bytesWritten=stream.stream_ops.write(stream,buffer,offset,length,position,canOwn);if(!seeking)stream.position+=bytesWritten;return bytesWritten},allocate:(stream,offset,length)=>{if(FS.isClosed(stream)){throw new FS.ErrnoError(8)}if(offset<0||length<=0){throw new FS.ErrnoError(28)}if((stream.flags&2097155)===0){throw new FS.ErrnoError(8)}if(!FS.isFile(stream.node.mode)&&!FS.isDir(stream.node.mode)){throw new FS.ErrnoError(43)}if(!stream.stream_ops.allocate){throw new FS.ErrnoError(138)}stream.stream_ops.allocate(stream,offset,length)},mmap:(stream,address,length,position,prot,flags)=>{if((prot&2)!==0&&(flags&2)===0&&(stream.flags&2097155)!==2){throw new FS.ErrnoError(2)}if((stream.flags&2097155)===1){throw new FS.ErrnoError(2)}if(!stream.stream_ops.mmap){throw new FS.ErrnoError(43)}return 
stream.stream_ops.mmap(stream,address,length,position,prot,flags)},msync:(stream,buffer,offset,length,mmapFlags)=>{if(!stream||!stream.stream_ops.msync){return 0}return stream.stream_ops.msync(stream,buffer,offset,length,mmapFlags)},munmap:stream=>0,ioctl:(stream,cmd,arg)=>{if(!stream.stream_ops.ioctl){throw new FS.ErrnoError(59)}return stream.stream_ops.ioctl(stream,cmd,arg)},readFile:(path,opts={})=>{opts.flags=opts.flags||0;opts.encoding=opts.encoding||"binary";if(opts.encoding!=="utf8"&&opts.encoding!=="binary"){throw new Error('Invalid encoding type "'+opts.encoding+'"')}var ret;var stream=FS.open(path,opts.flags);var stat=FS.stat(path);var length=stat.size;var buf=new Uint8Array(length);FS.read(stream,buf,0,length,0);if(opts.encoding==="utf8"){ret=UTF8ArrayToString(buf,0)}else if(opts.encoding==="binary"){ret=buf}FS.close(stream);return ret},writeFile:(path,data,opts={})=>{opts.flags=opts.flags||577;var stream=FS.open(path,opts.flags,opts.mode);if(typeof data=="string"){var buf=new Uint8Array(lengthBytesUTF8(data)+1);var actualNumBytes=stringToUTF8Array(data,buf,0,buf.length);FS.write(stream,buf,0,actualNumBytes,undefined,opts.canOwn)}else if(ArrayBuffer.isView(data)){FS.write(stream,data,0,data.byteLength,undefined,opts.canOwn)}else{throw new Error("Unsupported data type")}FS.close(stream)},cwd:()=>FS.currentPath,chdir:path=>{var lookup=FS.lookupPath(path,{follow:true});if(lookup.node===null){throw new FS.ErrnoError(44)}if(!FS.isDir(lookup.node.mode)){throw new FS.ErrnoError(54)}var errCode=FS.nodePermissions(lookup.node,"x");if(errCode){throw new 
FS.ErrnoError(errCode)}FS.currentPath=lookup.path},createDefaultDirectories:()=>{FS.mkdir("/tmp");FS.mkdir("/home");FS.mkdir("/home/web_user")},createDefaultDevices:()=>{FS.mkdir("/dev");FS.registerDevice(FS.makedev(1,3),{read:()=>0,write:(stream,buffer,offset,length,pos)=>length});FS.mkdev("/dev/null",FS.makedev(1,3));TTY.register(FS.makedev(5,0),TTY.default_tty_ops);TTY.register(FS.makedev(6,0),TTY.default_tty1_ops);FS.mkdev("/dev/tty",FS.makedev(5,0));FS.mkdev("/dev/tty1",FS.makedev(6,0));var random_device=getRandomDevice();FS.createDevice("/dev","random",random_device);FS.createDevice("/dev","urandom",random_device);FS.mkdir("/dev/shm");FS.mkdir("/dev/shm/tmp")},createSpecialDirectories:()=>{FS.mkdir("/proc");var proc_self=FS.mkdir("/proc/self");FS.mkdir("/proc/self/fd");FS.mount({mount:()=>{var node=FS.createNode(proc_self,"fd",16384|511,73);node.node_ops={lookup:(parent,name)=>{var fd=+name;var stream=FS.getStream(fd);if(!stream)throw new FS.ErrnoError(8);var ret={parent:null,mount:{mountpoint:"fake"},node_ops:{readlink:()=>stream.path}};ret.parent=ret;return ret}};return node}},{},"/proc/self/fd")},createStandardStreams:()=>{if(Module["stdin"]){FS.createDevice("/dev","stdin",Module["stdin"])}else{FS.symlink("/dev/tty","/dev/stdin")}if(Module["stdout"]){FS.createDevice("/dev","stdout",null,Module["stdout"])}else{FS.symlink("/dev/tty","/dev/stdout")}if(Module["stderr"]){FS.createDevice("/dev","stderr",null,Module["stderr"])}else{FS.symlink("/dev/tty1","/dev/stderr")}var stdin=FS.open("/dev/stdin",0);var stdout=FS.open("/dev/stdout",1);var stderr=FS.open("/dev/stderr",1)},ensureErrnoError:()=>{if(FS.ErrnoError)return;FS.ErrnoError=function ErrnoError(errno,node){this.node=node;this.setErrno=function(errno){this.errno=errno};this.setErrno(errno);this.message="FS error"};FS.ErrnoError.prototype=new Error;FS.ErrnoError.prototype.constructor=FS.ErrnoError;[44].forEach(code=>{FS.genericErrors[code]=new 
FS.ErrnoError(code);FS.genericErrors[code].stack=""})},staticInit:()=>{FS.ensureErrnoError();FS.nameTable=new Array(4096);FS.mount(MEMFS,{},"/");FS.createDefaultDirectories();FS.createDefaultDevices();FS.createSpecialDirectories();FS.filesystems={"MEMFS":MEMFS,"IDBFS":IDBFS}},init:(input,output,error)=>{FS.init.initialized=true;FS.ensureErrnoError();Module["stdin"]=input||Module["stdin"];Module["stdout"]=output||Module["stdout"];Module["stderr"]=error||Module["stderr"];FS.createStandardStreams()},quit:()=>{FS.init.initialized=false;for(var i=0;i<FS.streams.length;i++){var stream=FS.streams[i];if(!stream){continue}FS.close(stream)}},getMode:(canRead,canWrite)=>{var mode=0;if(canRead)mode|=292|73;if(canWrite)mode|=146;return mode},findObject:(path,dontResolveLastLink)=>{var ret=FS.analyzePath(path,dontResolveLastLink);if(ret.exists){return ret.object}else{return null}},analyzePath:(path,dontResolveLastLink)=>{try{var lookup=FS.lookupPath(path,{follow:!dontResolveLastLink});path=lookup.path}catch(e){}var ret={isRoot:false,exists:false,error:0,name:null,path:null,object:null,parentExists:false,parentPath:null,parentObject:null};try{var lookup=FS.lookupPath(path,{parent:true});ret.parentExists=true;ret.parentPath=lookup.path;ret.parentObject=lookup.node;ret.name=PATH.basename(path);lookup=FS.lookupPath(path,{follow:!dontResolveLastLink});ret.exists=true;ret.path=lookup.path;ret.object=lookup.node;ret.name=lookup.node.name;ret.isRoot=lookup.path==="/"}catch(e){ret.error=e.errno}return ret},createPath:(parent,path,canRead,canWrite)=>{parent=typeof parent=="string"?parent:FS.getPath(parent);var parts=path.split("/").reverse();while(parts.length){var part=parts.pop();if(!part)continue;var current=PATH.join2(parent,part);try{FS.mkdir(current)}catch(e){}parent=current}return current},createFile:(parent,name,properties,canRead,canWrite)=>{var path=PATH.join2(typeof parent=="string"?parent:FS.getPath(parent),name);var mode=FS.getMode(canRead,canWrite);return FS.create(path,mode)},createDataFile:(parent,name,data,canRead,canWrite,canOwn)=>{var path=name;if(parent){parent=typeof
parent=="string"?parent:FS.getPath(parent);path=name?PATH.join2(parent,name):parent}var mode=FS.getMode(canRead,canWrite);var node=FS.create(path,mode);if(data){if(typeof data=="string"){var arr=new Array(data.length);for(var i=0,len=data.length;i<len;++i)arr[i]=data.charCodeAt(i);data=arr}FS.chmod(node,mode|146);var stream=FS.open(node,577);FS.write(stream,data,0,data.length,0,canOwn);FS.close(stream);FS.chmod(node,mode)}return node},createDevice:(parent,name,input,output)=>{var path=PATH.join2(typeof parent=="string"?parent:FS.getPath(parent),name);var mode=FS.getMode(!!input,!!output);if(!FS.createDevice.major)FS.createDevice.major=64;var dev=FS.makedev(FS.createDevice.major++,0);FS.registerDevice(dev,{open:stream=>{stream.seekable=false},close:stream=>{if(output&&output.buffer&&output.buffer.length){output(10)}},read:(stream,buffer,offset,length,pos)=>{var bytesRead=0;for(var i=0;i<length;i++){var result;try{result=input()}catch(e){throw new FS.ErrnoError(29)}if(result===undefined&&bytesRead===0){throw new FS.ErrnoError(6)}if(result===null||result===undefined)break;bytesRead++;buffer[offset+i]=result}if(bytesRead){stream.node.timestamp=Date.now()}return bytesRead},write:(stream,buffer,offset,length,pos)=>{for(var i=0;i<length;i++){try{output(buffer[offset+i])}catch(e){throw new FS.ErrnoError(29)}}if(length){stream.node.timestamp=Date.now()}return i}});return FS.mkdev(path,mode,dev)},forceLoadFile:obj=>{if(obj.isDevice||obj.isFolder||obj.link||obj.contents)return true;if(typeof XMLHttpRequest!="undefined"){throw new Error("Lazy loading should have been performed (contents set) in createLazyFile, but it was not. Lazy loading only works in web workers. Use --embed-file or --preload-file in emcc on the main thread.")}else if(read_){try{obj.contents=intArrayFromString(read_(obj.url),true);obj.usedBytes=obj.contents.length}catch(e){throw new FS.ErrnoError(29)}}else{throw new Error("Cannot load without read() or XMLHttpRequest.")}},createLazyFile:(parent,name,url,canRead,canWrite)=>{function LazyUint8Array(){this.lengthKnown=false;this.chunks=[]}LazyUint8Array.prototype.get=function LazyUint8Array_get(idx){if(idx>this.length-1||idx<0){return undefined}var chunkOffset=idx%this.chunkSize;var chunkNum=idx/this.chunkSize|0;return this.getter(chunkNum)[chunkOffset]};LazyUint8Array.prototype.setDataGetter=function LazyUint8Array_setDataGetter(getter){this.getter=getter};LazyUint8Array.prototype.cacheLength=function LazyUint8Array_cacheLength(){var xhr=new XMLHttpRequest;xhr.open("HEAD",url,false);xhr.send(null);if(!(xhr.status>=200&&xhr.status<300||xhr.status===304))throw new Error("Couldn't load "+url+". 
Status: "+xhr.status);var datalength=Number(xhr.getResponseHeader("Content-length"));var header;var hasByteServing=(header=xhr.getResponseHeader("Accept-Ranges"))&&header==="bytes";var usesGzip=(header=xhr.getResponseHeader("Content-Encoding"))&&header==="gzip";var chunkSize=1024*1024;if(!hasByteServing)chunkSize=datalength;var doXHR=(from,to)=>{if(from>to)throw new Error("invalid range ("+from+", "+to+") or no bytes requested!");if(to>datalength-1)throw new Error("only "+datalength+" bytes available! programmer error!");var xhr=new XMLHttpRequest;xhr.open("GET",url,false);if(datalength!==chunkSize)xhr.setRequestHeader("Range","bytes="+from+"-"+to);xhr.responseType="arraybuffer";if(xhr.overrideMimeType){xhr.overrideMimeType("text/plain; charset=x-user-defined")}xhr.send(null);if(!(xhr.status>=200&&xhr.status<300||xhr.status===304))throw new Error("Couldn't load "+url+". Status: "+xhr.status);if(xhr.response!==undefined){return new Uint8Array(xhr.response||[])}else{return intArrayFromString(xhr.responseText||"",true)}};var lazyArray=this;lazyArray.setDataGetter(chunkNum=>{var start=chunkNum*chunkSize;var end=(chunkNum+1)*chunkSize-1;end=Math.min(end,datalength-1);if(typeof lazyArray.chunks[chunkNum]=="undefined"){lazyArray.chunks[chunkNum]=doXHR(start,end)}if(typeof lazyArray.chunks[chunkNum]=="undefined")throw new Error("doXHR failed!");return lazyArray.chunks[chunkNum]});if(usesGzip||!datalength){chunkSize=datalength=1;datalength=this.getter(0).length;chunkSize=datalength;out("LazyFiles on gzip forces download of the whole file when length is accessed")}this._length=datalength;this._chunkSize=chunkSize;this.lengthKnown=true};if(typeof XMLHttpRequest!="undefined"){if(!ENVIRONMENT_IS_WORKER)throw"Cannot do synchronous binary XHRs outside webworkers in modern browsers. 
Use --embed-file or --preload-file in emcc";var lazyArray=new LazyUint8Array;Object.defineProperties(lazyArray,{length:{get:function(){if(!this.lengthKnown){this.cacheLength()}return this._length}},chunkSize:{get:function(){if(!this.lengthKnown){this.cacheLength()}return this._chunkSize}}});var properties={isDevice:false,contents:lazyArray}}else{var properties={isDevice:false,url:url}}var node=FS.createFile(parent,name,properties,canRead,canWrite);if(properties.contents){node.contents=properties.contents}else if(properties.url){node.contents=null;node.url=properties.url}Object.defineProperties(node,{usedBytes:{get:function(){return this.contents.length}}});var stream_ops={};var keys=Object.keys(node.stream_ops);keys.forEach(key=>{var fn=node.stream_ops[key];stream_ops[key]=function forceLoadLazyFile(){FS.forceLoadFile(node);return fn.apply(null,arguments)}});stream_ops.read=((stream,buffer,offset,length,position)=>{FS.forceLoadFile(node);var contents=stream.node.contents;if(position>=contents.length)return 0;var size=Math.min(contents.length-position,length);if(contents.slice){for(var i=0;i<size;i++){buffer[offset+i]=contents[position+i]}}else{for(var i=0;i<size;i++){buffer[offset+i]=contents.get(position+i)}}return size});node.stream_ops=stream_ops;return node},createPreloadedFile:(parent,name,url,canRead,canWrite,onload,onerror,dontCreateFile,canOwn,preFinish)=>{var fullname=name?PATH_FS.resolve(PATH.join2(parent,name)):parent;var dep=getUniqueRunDependency("cp "+fullname);function processData(byteArray){function finish(byteArray){if(preFinish)preFinish();if(!dontCreateFile){FS.createDataFile(parent,name,byteArray,canRead,canWrite,canOwn)}if(onload)onload();removeRunDependency(dep)}if(Browser.handledByPreloadPlugin(byteArray,fullname,finish,()=>{if(onerror)onerror();removeRunDependency(dep)})){return}finish(byteArray)}addRunDependency(dep);if(typeof url=="string"){asyncLoad(url,byteArray=>processData(byteArray),onerror)}else{processData(url)}},indexedDB:()=>{return window.indexedDB||window.mozIndexedDB||window.webkitIndexedDB||window.msIndexedDB},DB_NAME:()=>{return"EM_FS_"+window.location.pathname},DB_VERSION:20,DB_STORE_NAME:"FILE_DATA",saveFilesToDB:(paths,onload,onerror)=>{onload=onload||(()=>{});onerror=onerror||(()=>{});var
indexedDB=FS.indexedDB();try{var openRequest=indexedDB.open(FS.DB_NAME(),FS.DB_VERSION)}catch(e){return onerror(e)}openRequest.onupgradeneeded=(()=>{out("creating db");var db=openRequest.result;db.createObjectStore(FS.DB_STORE_NAME)});openRequest.onsuccess=(()=>{var db=openRequest.result;var transaction=db.transaction([FS.DB_STORE_NAME],"readwrite");var files=transaction.objectStore(FS.DB_STORE_NAME);var ok=0,fail=0,total=paths.length;function finish(){if(fail==0)onload();else onerror()}paths.forEach(path=>{var putRequest=files.put(FS.analyzePath(path).object.contents,path);putRequest.onsuccess=(()=>{ok++;if(ok+fail==total)finish()});putRequest.onerror=(()=>{fail++;if(ok+fail==total)finish()})});transaction.onerror=onerror});openRequest.onerror=onerror},loadFilesFromDB:(paths,onload,onerror)=>{onload=onload||(()=>{});onerror=onerror||(()=>{});var indexedDB=FS.indexedDB();try{var openRequest=indexedDB.open(FS.DB_NAME(),FS.DB_VERSION)}catch(e){return onerror(e)}openRequest.onupgradeneeded=onerror;openRequest.onsuccess=(()=>{var db=openRequest.result;try{var transaction=db.transaction([FS.DB_STORE_NAME],"readonly")}catch(e){onerror(e);return}var files=transaction.objectStore(FS.DB_STORE_NAME);var ok=0,fail=0,total=paths.length;function finish(){if(fail==0)onload();else onerror()}paths.forEach(path=>{var getRequest=files.get(path);getRequest.onsuccess=(()=>{if(FS.analyzePath(path).exists){FS.unlink(path)}FS.createDataFile(PATH.dirname(path),PATH.basename(path),getRequest.result,true,true,true);ok++;if(ok+fail==total)finish()});getRequest.onerror=(()=>{fail++;if(ok+fail==total)finish()})});transaction.onerror=onerror});openRequest.onerror=onerror}};var SYSCALLS={DEFAULT_POLLMASK:5,calculateAt:function(dirfd,path,allowEmpty){if(path[0]==="/"){return path}var dir;if(dirfd===-100){dir=FS.cwd()}else{var dirstream=FS.getStream(dirfd);if(!dirstream)throw new FS.ErrnoError(8);dir=dirstream.path}if(path.length==0){if(!allowEmpty){throw new FS.ErrnoError(44)}return dir}return 
PATH.join2(dir,path)},doStat:function(func,path,buf){try{var stat=func(path)}catch(e){if(e&&e.node&&PATH.normalize(path)!==PATH.normalize(FS.getPath(e.node))){return-54}throw e}HEAP32[buf>>2]=stat.dev;HEAP32[buf+4>>2]=0;HEAP32[buf+8>>2]=stat.ino;HEAP32[buf+12>>2]=stat.mode;HEAP32[buf+16>>2]=stat.nlink;HEAP32[buf+20>>2]=stat.uid;HEAP32[buf+24>>2]=stat.gid;HEAP32[buf+28>>2]=stat.rdev;HEAP32[buf+32>>2]=0;tempI64=[stat.size>>>0,(tempDouble=stat.size,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[buf+40>>2]=tempI64[0],HEAP32[buf+44>>2]=tempI64[1];HEAP32[buf+48>>2]=4096;HEAP32[buf+52>>2]=stat.blocks;HEAP32[buf+56>>2]=stat.atime.getTime()/1e3|0;HEAP32[buf+60>>2]=0;HEAP32[buf+64>>2]=stat.mtime.getTime()/1e3|0;HEAP32[buf+68>>2]=0;HEAP32[buf+72>>2]=stat.ctime.getTime()/1e3|0;HEAP32[buf+76>>2]=0;tempI64=[stat.ino>>>0,(tempDouble=stat.ino,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[buf+80>>2]=tempI64[0],HEAP32[buf+84>>2]=tempI64[1];return 0},doMsync:function(addr,stream,len,flags,offset){var buffer=HEAPU8.slice(addr,addr+len);FS.msync(stream,buffer,offset,len,flags)},doMkdir:function(path,mode){path=PATH.normalize(path);if(path[path.length-1]==="/")path=path.substr(0,path.length-1);FS.mkdir(path,mode,0);return 0},doMknod:function(path,mode,dev){switch(mode&61440){case 32768:case 8192:case 24576:case 4096:case 49152:break;default:return-28}FS.mknod(path,mode,dev);return 0},doReadlink:function(path,buf,bufsize){if(bufsize<=0)return-28;var ret=FS.readlink(path);var len=Math.min(bufsize,lengthBytesUTF8(ret));var endChar=HEAP8[buf+len];stringToUTF8(ret,buf,bufsize+1);HEAP8[buf+len]=endChar;return len},doAccess:function(path,amode){if(amode&~7){return-28}var lookup=FS.lookupPath(path,{follow:true});var 
node=lookup.node;if(!node){return-44}var perms="";if(amode&4)perms+="r";if(amode&2)perms+="w";if(amode&1)perms+="x";if(perms&&FS.nodePermissions(node,perms)){return-2}return 0},doReadv:function(stream,iov,iovcnt,offset){var ret=0;for(var i=0;i<iovcnt;i++){var ptr=HEAP32[iov+i*8>>2];var len=HEAP32[iov+(i*8+4)>>2];var curr=FS.read(stream,HEAP8,ptr,len,offset);if(curr<0)return-1;ret+=curr;if(curr<len)break}return ret},doWritev:function(stream,iov,iovcnt,offset){var ret=0;for(var i=0;i<iovcnt;i++){var ptr=HEAP32[iov+i*8>>2];var len=HEAP32[iov+(i*8+4)>>2];var curr=FS.write(stream,HEAP8,ptr,len,offset);if(curr<0)return-1;ret+=curr}return ret},varargs:undefined,get:function(){SYSCALLS.varargs+=4;var ret=HEAP32[SYSCALLS.varargs-4>>2];return ret},getStr:function(ptr){var ret=UTF8ToString(ptr);return ret},getStreamFromFD:function(fd){var stream=FS.getStream(fd);if(!stream)throw new FS.ErrnoError(8);return stream},get64:function(low,high){return low}};function ___syscall__newselect(nfds,readfds,writefds,exceptfds,timeout){try{var total=0;var srcReadLow=readfds?HEAP32[readfds>>2]:0,srcReadHigh=readfds?HEAP32[readfds+4>>2]:0;var srcWriteLow=writefds?HEAP32[writefds>>2]:0,srcWriteHigh=writefds?HEAP32[writefds+4>>2]:0;var srcExceptLow=exceptfds?HEAP32[exceptfds>>2]:0,srcExceptHigh=exceptfds?HEAP32[exceptfds+4>>2]:0;var dstReadLow=0,dstReadHigh=0;var dstWriteLow=0,dstWriteHigh=0;var dstExceptLow=0,dstExceptHigh=0;var allLow=(readfds?HEAP32[readfds>>2]:0)|(writefds?HEAP32[writefds>>2]:0)|(exceptfds?HEAP32[exceptfds>>2]:0);var allHigh=(readfds?HEAP32[readfds+4>>2]:0)|(writefds?HEAP32[writefds+4>>2]:0)|(exceptfds?HEAP32[exceptfds+4>>2]:0);var check=function(fd,low,high,val){return fd<32?low&val:high&val};for(var fd=0;fd<nfds;fd++){var mask=1<<fd%32;if(!check(fd,allLow,allHigh,mask)){continue}var stream=SYSCALLS.getStreamFromFD(fd);var flags=SYSCALLS.DEFAULT_POLLMASK;if(stream.stream_ops.poll){flags=stream.stream_ops.poll(stream)}if(flags&1&&check(fd,srcReadLow,srcReadHigh,mask)){if(fd<32){dstReadLow=dstReadLow|mask}else{dstReadHigh=dstReadHigh|mask}total++}if(flags&4&&check(fd,srcWriteLow,srcWriteHigh,mask)){if(fd<32){dstWriteLow=dstWriteLow|mask}else{dstWriteHigh=dstWriteHigh|mask}total++}if(flags&2&&check(fd,srcExceptLow,srcExceptHigh,mask)){if(fd<32){dstExceptLow=dstExceptLow|mask}else{dstExceptHigh=dstExceptHigh|mask}total++}}if(readfds){HEAP32[readfds>>2]=dstReadLow;HEAP32[readfds+4>>2]=dstReadHigh}if(writefds){HEAP32[writefds>>2]=dstWriteLow;HEAP32[writefds+4>>2]=dstWriteHigh}if(exceptfds){HEAP32[exceptfds>>2]=dstExceptLow;HEAP32[exceptfds+4>>2]=dstExceptHigh}return total}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}var SOCKFS={mount:function(mount){Module["websocket"]=Module["websocket"]&&"object"===typeof
Module["websocket"]?Module["websocket"]:{};Module["websocket"]._callbacks={};Module["websocket"]["on"]=function(event,callback){if("function"===typeof callback){this._callbacks[event]=callback}return this};Module["websocket"].emit=function(event,param){if("function"===typeof this._callbacks[event]){this._callbacks[event].call(this,param)}};return FS.createNode(null,"/",16384|511,0)},createSocket:function(family,type,protocol){type&=~526336;var streaming=type==1;if(streaming&&protocol&&protocol!=6){throw new FS.ErrnoError(66)}var sock={family:family,type:type,protocol:protocol,server:null,error:null,peers:{},pending:[],recv_queue:[],sock_ops:SOCKFS.websocket_sock_ops};var name=SOCKFS.nextname();var node=FS.createNode(SOCKFS.root,name,49152,0);node.sock=sock;var stream=FS.createStream({path:name,node:node,flags:2,seekable:false,stream_ops:SOCKFS.stream_ops});sock.stream=stream;return sock},getSocket:function(fd){var stream=FS.getStream(fd);if(!stream||!FS.isSocket(stream.node.mode)){return null}return stream.node.sock},stream_ops:{poll:function(stream){var sock=stream.node.sock;return sock.sock_ops.poll(sock)},ioctl:function(stream,request,varargs){var sock=stream.node.sock;return sock.sock_ops.ioctl(sock,request,varargs)},read:function(stream,buffer,offset,length,position){var sock=stream.node.sock;var msg=sock.sock_ops.recvmsg(sock,length);if(!msg){return 0}buffer.set(msg.buffer,offset);return msg.buffer.length},write:function(stream,buffer,offset,length,position){var sock=stream.node.sock;return sock.sock_ops.sendmsg(sock,buffer,offset,length)},close:function(stream){var sock=stream.node.sock;sock.sock_ops.close(sock)}},nextname:function(){if(!SOCKFS.nextname.current){SOCKFS.nextname.current=0}return"socket["+SOCKFS.nextname.current+++"]"},websocket_sock_ops:{createPeer:function(sock,addr,port){var ws;if(typeof addr=="object"){ws=addr;addr=null;port=null}if(ws){if(ws._socket){addr=ws._socket.remoteAddress;port=ws._socket.remotePort}else{var 
result=/ws[s]?:\/\/([^:]+):(\d+)/.exec(ws.url);if(!result){throw new Error("WebSocket URL must be in the format ws(s)://address:port")}addr=result[1];port=parseInt(result[2],10)}}else{try{var runtimeConfig=Module["websocket"]&&"object"===typeof Module["websocket"];var url="ws:#".replace("#","//");if(runtimeConfig){if("string"===typeof Module["websocket"]["url"]){url=Module["websocket"]["url"]}}if(url==="ws://"||url==="wss://"){var parts=addr.split("/");url=url+parts[0]+":"+port+"/"+parts.slice(1).join("/")}var subProtocols="binary";if(runtimeConfig){if("string"===typeof Module["websocket"]["subprotocol"]){subProtocols=Module["websocket"]["subprotocol"]}}var opts=undefined;if(subProtocols!=="null"){subProtocols=subProtocols.replace(/^ +| +$/g,"").split(/ *, */);opts=ENVIRONMENT_IS_NODE?{"protocol":subProtocols.toString()}:subProtocols}if(runtimeConfig&&null===Module["websocket"]["subprotocol"]){subProtocols="null";opts=undefined}var WebSocketConstructor;if(ENVIRONMENT_IS_NODE){WebSocketConstructor=require("ws")}else{WebSocketConstructor=WebSocket}ws=new WebSocketConstructor(url,opts);ws.binaryType="arraybuffer"}catch(e){throw new FS.ErrnoError(23)}}var peer={addr:addr,port:port,socket:ws,dgram_send_queue:[]};SOCKFS.websocket_sock_ops.addPeer(sock,peer);SOCKFS.websocket_sock_ops.handlePeerEvents(sock,peer);if(sock.type===2&&typeof sock.sport!="undefined"){peer.dgram_send_queue.push(new Uint8Array([255,255,255,255,"p".charCodeAt(0),"o".charCodeAt(0),"r".charCodeAt(0),"t".charCodeAt(0),(sock.sport&65280)>>8,sock.sport&255]))}return peer},getPeer:function(sock,addr,port){return sock.peers[addr+":"+port]},addPeer:function(sock,peer){sock.peers[peer.addr+":"+peer.port]=peer},removePeer:function(sock,peer){delete sock.peers[peer.addr+":"+peer.port]},handlePeerEvents:function(sock,peer){var first=true;var handleOpen=function(){Module["websocket"].emit("open",sock.stream.fd);try{var 
queued=peer.dgram_send_queue.shift();while(queued){peer.socket.send(queued);queued=peer.dgram_send_queue.shift()}}catch(e){peer.socket.close()}};function handleMessage(data){if(typeof data=="string"){var encoder=new TextEncoder;data=encoder.encode(data)}else{assert(data.byteLength!==undefined);if(data.byteLength==0){return}else{data=new Uint8Array(data)}}var wasfirst=first;first=false;if(wasfirst&&data.length===10&&data[0]===255&&data[1]===255&&data[2]===255&&data[3]===255&&data[4]==="p".charCodeAt(0)&&data[5]==="o".charCodeAt(0)&&data[6]==="r".charCodeAt(0)&&data[7]==="t".charCodeAt(0)){var newport=data[8]<<8|data[9];SOCKFS.websocket_sock_ops.removePeer(sock,peer);peer.port=newport;SOCKFS.websocket_sock_ops.addPeer(sock,peer);return}sock.recv_queue.push({addr:peer.addr,port:peer.port,data:data});Module["websocket"].emit("message",sock.stream.fd)}if(ENVIRONMENT_IS_NODE){peer.socket.on("open",handleOpen);peer.socket.on("message",function(data,flags){if(!flags.binary){return}handleMessage(new Uint8Array(data).buffer)});peer.socket.on("close",function(){Module["websocket"].emit("close",sock.stream.fd)});peer.socket.on("error",function(error){sock.error=14;Module["websocket"].emit("error",[sock.stream.fd,sock.error,"ECONNREFUSED: Connection refused"])})}else{peer.socket.onopen=handleOpen;peer.socket.onclose=function(){Module["websocket"].emit("close",sock.stream.fd)};peer.socket.onmessage=function peer_socket_onmessage(event){handleMessage(event.data)};peer.socket.onerror=function(error){sock.error=14;Module["websocket"].emit("error",[sock.stream.fd,sock.error,"ECONNREFUSED: Connection refused"])}}},poll:function(sock){if(sock.type===1&&sock.server){return sock.pending.length?64|1:0}var mask=0;var 
dest=sock.type===1?SOCKFS.websocket_sock_ops.getPeer(sock,sock.daddr,sock.dport):null;if(sock.recv_queue.length||!dest||dest&&dest.socket.readyState===dest.socket.CLOSING||dest&&dest.socket.readyState===dest.socket.CLOSED){mask|=64|1}if(!dest||dest&&dest.socket.readyState===dest.socket.OPEN){mask|=4}if(dest&&dest.socket.readyState===dest.socket.CLOSING||dest&&dest.socket.readyState===dest.socket.CLOSED){mask|=16}return mask},ioctl:function(sock,request,arg){switch(request){case 21531:var bytes=0;if(sock.recv_queue.length){bytes=sock.recv_queue[0].data.length}HEAP32[arg>>2]=bytes;return 0;default:return 28}},close:function(sock){if(sock.server){try{sock.server.close()}catch(e){}sock.server=null}var peers=Object.keys(sock.peers);for(var i=0;i>2]=value;return value}function inetPton4(str){var b=str.split(".");for(var i=0;i<4;i++){var tmp=Number(b[i]);if(isNaN(tmp))return null;b[i]=tmp}return(b[0]|b[1]<<8|b[2]<<16|b[3]<<24)>>>0}function jstoi_q(str){return parseInt(str)}function inetPton6(str){var words;var w,offset,z;var valid6regx=/^((?=.*::)(?!.*::.+::)(::)?([\dA-F]{1,4}:(:|\b)|){5}|([\dA-F]{1,4}:){6})((([\dA-F]{1,4}((?!\3)::|:\b|$))|(?!\2\3)){2}|(((2[0-4]|1\d|[1-9])?\d|25[0-5])\.?\b){4})$/i;var parts=[];if(!valid6regx.test(str)){return null}if(str==="::"){return[0,0,0,0,0,0,0,0]}if(str.startsWith("::")){str=str.replace("::","Z:")}else{str=str.replace("::",":Z:")}if(str.indexOf(".")>0){str=str.replace(new RegExp("[.]","g"),":");words=str.split(":");words[words.length-4]=jstoi_q(words[words.length-4])+jstoi_q(words[words.length-3])*256;words[words.length-3]=jstoi_q(words[words.length-2])+jstoi_q(words[words.length-1])*256;words=words.slice(0,words.length-2)}else{words=str.split(":")}offset=0;z=0;for(w=0;w<words.length;w++){if(typeof words[w]=="string"){if(words[w]==="Z"){for(z=0;z<8-words.length+1;z++){parts[w+z]=0}offset=z-1}else{parts[w+offset]=_htons(parseInt(words[w],16))}}else{parts[w+offset]=words[w]}}return[parts[1]<<16|parts[0],parts[3]<<16|parts[2],parts[5]<<16|parts[4],parts[7]<<16|parts[6]]}function writeSockaddr(sa,family,addr,port,addrlen){switch(family){case 2:addr=inetPton4(addr);zeroMemory(sa,16);if(addrlen){HEAP32[addrlen>>2]=16}HEAP16[sa>>1]=family;HEAP32[sa+4>>2]=addr;HEAP16[sa+2>>1]=_htons(port);break;case
10:addr=inetPton6(addr);zeroMemory(sa,28);if(addrlen){HEAP32[addrlen>>2]=28}HEAP32[sa>>2]=family;HEAP32[sa+8>>2]=addr[0];HEAP32[sa+12>>2]=addr[1];HEAP32[sa+16>>2]=addr[2];HEAP32[sa+20>>2]=addr[3];HEAP16[sa+2>>1]=_htons(port);break;default:return 5}return 0}var DNS={address_map:{id:1,addrs:{},names:{}},lookup_name:function(name){var res=inetPton4(name);if(res!==null){return name}res=inetPton6(name);if(res!==null){return name}var addr;if(DNS.address_map.addrs[name]){addr=DNS.address_map.addrs[name]}else{var id=DNS.address_map.id++;assert(id<65535,"exceeded max address mappings of 65535");addr="172.29."+(id&255)+"."+(id&65280);DNS.address_map.names[addr]=name;DNS.address_map.addrs[name]=addr}return addr},lookup_addr:function(addr){if(DNS.address_map.names[addr]){return DNS.address_map.names[addr]}return null}};function ___syscall_accept4(fd,addr,addrlen,flags){try{var sock=getSocketFromFD(fd);var newsock=sock.sock_ops.accept(sock);if(addr){var errno=writeSockaddr(addr,newsock.family,DNS.lookup_name(newsock.daddr),newsock.dport,addrlen)}return newsock.stream.fd}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function inetNtop4(addr){return(addr&255)+"."+(addr>>8&255)+"."+(addr>>16&255)+"."+(addr>>24&255)}function inetNtop6(ints){var str="";var word=0;var longest=0;var lastzero=0;var zstart=0;var len=0;var i=0;var parts=[ints[0]&65535,ints[0]>>16,ints[1]&65535,ints[1]>>16,ints[2]&65535,ints[2]>>16,ints[3]&65535,ints[3]>>16];var hasipv4=true;var v4part="";for(i=0;i<5;i++){if(parts[i]!==0){hasipv4=false;break}}if(hasipv4){v4part=inetNtop4(parts[6]|parts[7]<<16);if(parts[5]===-1){str="::ffff:";str+=v4part;return str}if(parts[5]===0){str="::";if(v4part==="0.0.0.0")v4part="";if(v4part==="0.0.0.1")v4part="1";str+=v4part;return 
str}}for(word=0;word<8;word++){if(parts[word]===0){if(word-lastzero>1){len=0}lastzero=word;len++}if(len>longest){longest=len;zstart=word-longest+1}}for(word=0;word<8;word++){if(longest>1){if(parts[word]===0&&word>=zstart&&word<zstart+longest){if(word===zstart){str+=":";if(zstart===0)str+=":"}continue}}str+=Number(parts[word]).toString(16);str+=word<7?":":""}return str}function readSockaddr(sa,salen){var family=HEAP16[sa>>1];var port=_ntohs(HEAPU16[sa+2>>1]);var addr;switch(family){case 2:if(salen!==16){return{errno:28}}addr=HEAP32[sa+4>>2];addr=inetNtop4(addr);break;case 10:if(salen!==28){return{errno:28}}addr=[HEAP32[sa+8>>2],HEAP32[sa+12>>2],HEAP32[sa+16>>2],HEAP32[sa+20>>2]];addr=inetNtop6(addr);break;default:return{errno:5}}return{family:family,addr:addr,port:port}}function getSocketAddress(addrp,addrlen,allowNull){if(allowNull&&addrp===0)return null;var info=readSockaddr(addrp,addrlen);if(info.errno)throw new FS.ErrnoError(info.errno);info.addr=DNS.lookup_addr(info.addr)||info.addr;return info}function ___syscall_bind(fd,addr,addrlen){try{var sock=getSocketFromFD(fd);var info=getSocketAddress(addr,addrlen);sock.sock_ops.bind(sock,info.addr,info.port);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_chmod(path,mode){try{path=SYSCALLS.getStr(path);FS.chmod(path,mode);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_connect(fd,addr,addrlen){try{var sock=getSocketFromFD(fd);var info=getSocketAddress(addr,addrlen);sock.sock_ops.connect(sock,info.addr,info.port);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_dup3(fd,suggestFD,flags){try{var old=SYSCALLS.getStreamFromFD(fd);if(old.fd===suggestFD)return-28;var suggest=FS.getStream(suggestFD);if(suggest)FS.close(suggest);return FS.open(old.path,old.flags,0,suggestFD,suggestFD).fd}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_faccessat(dirfd,path,amode,flags){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);return
SYSCALLS.doAccess(path,amode)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_fcntl64(fd,cmd,varargs){SYSCALLS.varargs=varargs;try{var stream=SYSCALLS.getStreamFromFD(fd);switch(cmd){case 0:{var arg=SYSCALLS.get();if(arg<0){return-28}var newStream;newStream=FS.open(stream.path,stream.flags,0,arg);return newStream.fd}case 1:case 2:return 0;case 3:return stream.flags;case 4:{var arg=SYSCALLS.get();stream.flags|=arg;return 0}case 5:{var arg=SYSCALLS.get();var offset=0;HEAP16[arg+offset>>1]=2;return 0}case 6:case 7:return 0;case 16:case 8:return-28;case 9:setErrNo(28);return-1;default:{return-28}}}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_fstat64(fd,buf){try{var stream=SYSCALLS.getStreamFromFD(fd);return SYSCALLS.doStat(FS.stat,stream.path,buf)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_ftruncate64(fd,low,high){try{var length=SYSCALLS.get64(low,high);FS.ftruncate(fd,length);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_getcwd(buf,size){try{if(size===0)return-28;var cwd=FS.cwd();var 
cwdLengthInBytes=lengthBytesUTF8(cwd);if(size>>0,(tempDouble=id,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[dirp+pos>>2]=tempI64[0],HEAP32[dirp+pos+4>>2]=tempI64[1];tempI64=[(idx+1)*struct_size>>>0,(tempDouble=(idx+1)*struct_size,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[dirp+pos+8>>2]=tempI64[0],HEAP32[dirp+pos+12>>2]=tempI64[1];HEAP16[dirp+pos+16>>1]=280;HEAP8[dirp+pos+18>>0]=type;stringToUTF8(name,dirp+pos+19,256);pos+=struct_size;idx+=1}FS.llseek(stream,idx*struct_size,0);return pos}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_getpeername(fd,addr,addrlen){try{var sock=getSocketFromFD(fd);if(!sock.daddr){return-53}var errno=writeSockaddr(addr,sock.family,DNS.lookup_name(sock.daddr),sock.dport,addrlen);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_getsockname(fd,addr,addrlen){try{err("__syscall_getsockname "+fd);var sock=getSocketFromFD(fd);var errno=writeSockaddr(addr,sock.family,DNS.lookup_name(sock.saddr||"0.0.0.0"),sock.sport,addrlen);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_getsockopt(fd,level,optname,optval,optlen){try{var sock=getSocketFromFD(fd);if(level===1){if(optname===4){HEAP32[optval>>2]=sock.error;HEAP32[optlen>>2]=4;sock.error=null;return 0}}return-50}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_ioctl(fd,op,varargs){SYSCALLS.varargs=varargs;try{var stream=SYSCALLS.getStreamFromFD(fd);switch(op){case 21509:case 21505:{if(!stream.tty)return-59;return 0}case 21510:case 21511:case 21512:case 21506:case 21507:case 
21508:{if(!stream.tty)return-59;return 0}case 21519:{if(!stream.tty)return-59;var argp=SYSCALLS.get();HEAP32[argp>>2]=0;return 0}case 21520:{if(!stream.tty)return-59;return-28}case 21531:{var argp=SYSCALLS.get();return FS.ioctl(stream,op,argp)}case 21523:{if(!stream.tty)return-59;return 0}case 21524:{if(!stream.tty)return-59;return 0}default:abort("bad ioctl syscall "+op)}}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_listen(fd,backlog){try{var sock=getSocketFromFD(fd);sock.sock_ops.listen(sock,backlog);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_lstat64(path,buf){try{path=SYSCALLS.getStr(path);return SYSCALLS.doStat(FS.lstat,path,buf)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_mkdir(path,mode){try{path=SYSCALLS.getStr(path);return SYSCALLS.doMkdir(path,mode)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_newfstatat(dirfd,path,buf,flags){try{path=SYSCALLS.getStr(path);var nofollow=flags&256;var allowEmpty=flags&4096;flags=flags&~4352;path=SYSCALLS.calculateAt(dirfd,path,allowEmpty);return SYSCALLS.doStat(nofollow?FS.lstat:FS.stat,path,buf)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_openat(dirfd,path,flags,varargs){SYSCALLS.varargs=varargs;try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);var mode=varargs?SYSCALLS.get():0;return FS.open(path,flags,mode).fd}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}var PIPEFS={BUCKET_BUFFER_SIZE:8192,mount:function(mount){return FS.createNode(null,"/",16384|511,0)},createPipe:function(){var pipe={buckets:[],refcnt:2};pipe.buckets.push({buffer:new Uint8Array(PIPEFS.BUCKET_BUFFER_SIZE),offset:0,roffset:0});var rName=PIPEFS.nextname();var 
wName=PIPEFS.nextname();var rNode=FS.createNode(PIPEFS.root,rName,4096,0);var wNode=FS.createNode(PIPEFS.root,wName,4096,0);rNode.pipe=pipe;wNode.pipe=pipe;var readableStream=FS.createStream({path:rName,node:rNode,flags:0,seekable:false,stream_ops:PIPEFS.stream_ops});rNode.stream=readableStream;var writableStream=FS.createStream({path:wName,node:wNode,flags:1,seekable:false,stream_ops:PIPEFS.stream_ops});wNode.stream=writableStream;return{readable_fd:readableStream.fd,writable_fd:writableStream.fd}},stream_ops:{poll:function(stream){var pipe=stream.node.pipe;if((stream.flags&2097155)===1){return 256|4}else{if(pipe.buckets.length>0){for(var i=0;i0){return 64|1}}}}return 0},ioctl:function(stream,request,varargs){return 28},fsync:function(stream){return 28},read:function(stream,buffer,offset,length,position){var pipe=stream.node.pipe;var currentLength=0;for(var i=0;i=dataLen){currBucket.buffer.set(data,currBucket.offset);currBucket.offset+=dataLen;return dataLen}else if(freeBytesInCurrBuffer>0){currBucket.buffer.set(data.subarray(0,freeBytesInCurrBuffer),currBucket.offset);currBucket.offset+=freeBytesInCurrBuffer;data=data.subarray(freeBytesInCurrBuffer,data.byteLength)}var numBuckets=data.byteLength/PIPEFS.BUCKET_BUFFER_SIZE|0;var remElements=data.byteLength%PIPEFS.BUCKET_BUFFER_SIZE;for(var i=0;i0){var newBucket={buffer:new Uint8Array(PIPEFS.BUCKET_BUFFER_SIZE),offset:data.byteLength,roffset:0};pipe.buckets.push(newBucket);newBucket.buffer.set(data)}return dataLen},close:function(stream){var pipe=stream.node.pipe;pipe.refcnt--;if(pipe.refcnt===0){pipe.buckets=null}}},nextname:function(){if(!PIPEFS.nextname.current){PIPEFS.nextname.current=0}return"pipe["+PIPEFS.nextname.current+++"]"}};function ___syscall_pipe(fdPtr){try{if(fdPtr==0){throw new FS.ErrnoError(21)}var res=PIPEFS.createPipe();HEAP32[fdPtr>>2]=res.readable_fd;HEAP32[fdPtr+4>>2]=res.writable_fd;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function 
___syscall_poll(fds,nfds,timeout){try{var nonzero=0;for(var i=0;i<nfds;i++){var pollfd=fds+8*i;var fd=HEAP32[pollfd>>2];var events=HEAP16[pollfd+4>>1];var mask=32;var stream=FS.getStream(fd);if(stream){mask=SYSCALLS.DEFAULT_POLLMASK;if(stream.stream_ops.poll){mask=stream.stream_ops.poll(stream)}}mask&=events|8|16;if(mask)nonzero++;HEAP16[pollfd+6>>1]=mask}return nonzero}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_readlinkat(dirfd,path,buf,bufsize){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);return SYSCALLS.doReadlink(path,buf,bufsize)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_recvfrom(fd,buf,len,flags,addr,addrlen){try{var sock=getSocketFromFD(fd);var msg=sock.sock_ops.recvmsg(sock,len);if(!msg)return 0;if(addr){var errno=writeSockaddr(addr,sock.family,DNS.lookup_name(msg.addr),msg.port,addrlen)}HEAPU8.set(msg.buffer,buf);return msg.buffer.byteLength}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_recvmsg(fd,message,flags){try{var sock=getSocketFromFD(fd);var iov=HEAP32[message+8>>2];var num=HEAP32[message+12>>2];var total=0;for(var i=0;i<num;i++){total+=HEAP32[iov+(8*i+4)>>2]}var msg=sock.sock_ops.recvmsg(sock,total);if(!msg)return 0;var name=HEAP32[message>>2];if(name){var errno=writeSockaddr(name,sock.family,DNS.lookup_name(msg.addr),msg.port)}var bytesRead=0;var bytesRemaining=msg.buffer.byteLength;for(var i=0;bytesRemaining>0&&i<num;i++){var iovbase=HEAP32[iov+8*i>>2];var iovlen=HEAP32[iov+(8*i+4)>>2];if(!iovlen){continue}var length=Math.min(iovlen,bytesRemaining);var buf=msg.buffer.subarray(bytesRead,bytesRead+length);HEAPU8.set(buf,iovbase+bytesRead);bytesRead+=length;bytesRemaining-=length}return bytesRead}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function
___syscall_renameat(olddirfd,oldpath,newdirfd,newpath){try{oldpath=SYSCALLS.getStr(oldpath);newpath=SYSCALLS.getStr(newpath);oldpath=SYSCALLS.calculateAt(olddirfd,oldpath);newpath=SYSCALLS.calculateAt(newdirfd,newpath);FS.rename(oldpath,newpath);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_rmdir(path){try{path=SYSCALLS.getStr(path);FS.rmdir(path);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_sendmsg(fd,message,flags){try{var sock=getSocketFromFD(fd);var iov=HEAP32[message+8>>2];var num=HEAP32[message+12>>2];var addr,port;var name=HEAP32[message>>2];var namelen=HEAP32[message+4>>2];if(name){var info=readSockaddr(name,namelen);if(info.errno)return-info.errno;port=info.port;addr=DNS.lookup_addr(info.addr)||info.addr}var total=0;for(var i=0;i<num;i++){total+=HEAP32[iov+(8*i+4)>>2]}var view=new Uint8Array(total);var offset=0;for(var i=0;i<num;i++){var iovbase=HEAP32[iov+8*i>>2];var iovlen=HEAP32[iov+(8*i+4)>>2];for(var j=0;j<iovlen;j++){view[offset++]=HEAP8[iovbase+j>>0]}}return sock.sock_ops.sendmsg(sock,view,0,total,addr,port)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_sendto(fd,message,length,flags,addr,addr_len){try{var sock=getSocketFromFD(fd);var dest=getSocketAddress(addr,addr_len,true);if(!dest){return FS.write(sock.stream,HEAP8,message,length)}else{return sock.sock_ops.sendmsg(sock,HEAP8,message,length,dest.addr,dest.port)}}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_socket(domain,type,protocol){try{var sock=SOCKFS.createSocket(domain,type,protocol);return sock.stream.fd}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_stat64(path,buf){try{path=SYSCALLS.getStr(path);return SYSCALLS.doStat(FS.stat,path,buf)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function
___syscall_statfs64(path,size,buf){try{path=SYSCALLS.getStr(path);HEAP32[buf+4>>2]=4096;HEAP32[buf+40>>2]=4096;HEAP32[buf+8>>2]=1e6;HEAP32[buf+12>>2]=5e5;HEAP32[buf+16>>2]=5e5;HEAP32[buf+20>>2]=FS.nextInode;HEAP32[buf+24>>2]=1e6;HEAP32[buf+28>>2]=42;HEAP32[buf+44>>2]=2;HEAP32[buf+36>>2]=255;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_truncate64(path,low,high){try{path=SYSCALLS.getStr(path);var length=SYSCALLS.get64(low,high);FS.truncate(path,length);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_unlinkat(dirfd,path,flags){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);if(flags===0){FS.unlink(path)}else if(flags===512){FS.rmdir(path)}else{abort("Invalid flags passed to unlinkat")}return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_utimensat(dirfd,path,times,flags){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path,true);if(!times){var atime=Date.now();var mtime=atime}else{var seconds=HEAP32[times>>2];var nanoseconds=HEAP32[times+4>>2];atime=seconds*1e3+nanoseconds/(1e3*1e3);times+=8;seconds=HEAP32[times>>2];nanoseconds=HEAP32[times+4>>2];mtime=seconds*1e3+nanoseconds/(1e3*1e3)}FS.utime(path,atime,mtime);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}var dlopen_main_init=0;function __dlopen_js(handle){var ret=!dlopen_main_init;dlopen_main_init=1;return ret}function __dlsym_js(handle,symbol){return 0}function __emscripten_date_now(){return Date.now()}var nowIsMonotonic=true;function __emscripten_get_now_is_monotonic(){return nowIsMonotonic}function __emscripten_throw_longjmp(){throw Infinity}function __gmtime_js(time,tmPtr){var date=new 
Date(HEAP32[time>>2]*1e3);HEAP32[tmPtr>>2]=date.getUTCSeconds();HEAP32[tmPtr+4>>2]=date.getUTCMinutes();HEAP32[tmPtr+8>>2]=date.getUTCHours();HEAP32[tmPtr+12>>2]=date.getUTCDate();HEAP32[tmPtr+16>>2]=date.getUTCMonth();HEAP32[tmPtr+20>>2]=date.getUTCFullYear()-1900;HEAP32[tmPtr+24>>2]=date.getUTCDay();var start=Date.UTC(date.getUTCFullYear(),0,1,0,0,0,0);var yday=(date.getTime()-start)/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday}function __localtime_js(time,tmPtr){var date=new Date(HEAP32[time>>2]*1e3);HEAP32[tmPtr>>2]=date.getSeconds();HEAP32[tmPtr+4>>2]=date.getMinutes();HEAP32[tmPtr+8>>2]=date.getHours();HEAP32[tmPtr+12>>2]=date.getDate();HEAP32[tmPtr+16>>2]=date.getMonth();HEAP32[tmPtr+20>>2]=date.getFullYear()-1900;HEAP32[tmPtr+24>>2]=date.getDay();var start=new Date(date.getFullYear(),0,1);var yday=(date.getTime()-start.getTime())/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday;HEAP32[tmPtr+36>>2]=-(date.getTimezoneOffset()*60);var summerOffset=new Date(date.getFullYear(),6,1).getTimezoneOffset();var winterOffset=start.getTimezoneOffset();var dst=(summerOffset!=winterOffset&&date.getTimezoneOffset()==Math.min(winterOffset,summerOffset))|0;HEAP32[tmPtr+32>>2]=dst}function __mktime_js(tmPtr){var date=new Date(HEAP32[tmPtr+20>>2]+1900,HEAP32[tmPtr+16>>2],HEAP32[tmPtr+12>>2],HEAP32[tmPtr+8>>2],HEAP32[tmPtr+4>>2],HEAP32[tmPtr>>2],0);var dst=HEAP32[tmPtr+32>>2];var guessedOffset=date.getTimezoneOffset();var start=new Date(date.getFullYear(),0,1);var summerOffset=new Date(date.getFullYear(),6,1).getTimezoneOffset();var winterOffset=start.getTimezoneOffset();var dstOffset=Math.min(winterOffset,summerOffset);if(dst<0){HEAP32[tmPtr+32>>2]=Number(summerOffset!=winterOffset&&dstOffset==guessedOffset)}else if(dst>0!=(dstOffset==guessedOffset)){var nonDstOffset=Math.max(winterOffset,summerOffset);var trueOffset=dst>0?dstOffset:nonDstOffset;date.setTime(date.getTime()+(trueOffset-guessedOffset)*6e4)}HEAP32[tmPtr+24>>2]=date.getDay();var 
yday=(date.getTime()-start.getTime())/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday;HEAP32[tmPtr>>2]=date.getSeconds();HEAP32[tmPtr+4>>2]=date.getMinutes();HEAP32[tmPtr+8>>2]=date.getHours();HEAP32[tmPtr+12>>2]=date.getDate();HEAP32[tmPtr+16>>2]=date.getMonth();return date.getTime()/1e3|0}function __mmap_js(addr,len,prot,flags,fd,off,allocated,builtin){try{var info=FS.getStream(fd);if(!info)return-8;var res=FS.mmap(info,addr,len,off,prot,flags);var ptr=res.ptr;HEAP32[allocated>>2]=res.allocated;return ptr}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function __munmap_js(addr,len,prot,flags,fd,offset){try{var stream=FS.getStream(fd);if(stream){if(prot&2){SYSCALLS.doMsync(addr,stream,len,flags,offset)}FS.munmap(stream)}}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function _tzset_impl(timezone,daylight,tzname){var currentYear=(new Date).getFullYear();var winter=new Date(currentYear,0,1);var summer=new Date(currentYear,6,1);var winterOffset=winter.getTimezoneOffset();var summerOffset=summer.getTimezoneOffset();var stdTimezoneOffset=Math.max(winterOffset,summerOffset);HEAP32[timezone>>2]=stdTimezoneOffset*60;HEAP32[daylight>>2]=Number(winterOffset!=summerOffset);function extractZone(date){var match=date.toTimeString().match(/\(([A-Za-z ]+)\)$/);return match?match[1]:"GMT"}var winterName=extractZone(winter);var summerName=extractZone(summer);var winterNamePtr=allocateUTF8(winterName);var summerNamePtr=allocateUTF8(summerName);if(summerOffset<winterOffset){HEAP32[tzname>>2]=winterNamePtr;HEAP32[tzname+4>>2]=summerNamePtr}else{HEAP32[tzname>>2]=summerNamePtr;HEAP32[tzname+4>>2]=winterNamePtr}}function __tzset_js(timezone,daylight,tzname){if(__tzset_js.called)return;__tzset_js.called=true;_tzset_impl(timezone,daylight,tzname)}function _abort(){abort("")}var readAsmConstArgsArray=[];function readAsmConstArgs(sigPtr,buf){readAsmConstArgsArray.length=0;var ch;buf>>=2;while(ch=HEAPU8[sigPtr++]){var
readAsmConstArgsDouble=ch<105;if(readAsmConstArgsDouble&&buf&1)buf++;readAsmConstArgsArray.push(readAsmConstArgsDouble?HEAPF64[buf++>>1]:HEAP32[buf]);++buf}return readAsmConstArgsArray}function mainThreadEM_ASM(code,sigPtr,argbuf,sync){var args=readAsmConstArgs(sigPtr,argbuf);return ASM_CONSTS[code].apply(null,args)}function _emscripten_asm_const_int_sync_on_main_thread(code,sigPtr,argbuf){return mainThreadEM_ASM(code,sigPtr,argbuf,1)}function _emscripten_set_main_loop_timing(mode,value){Browser.mainLoop.timingMode=mode;Browser.mainLoop.timingValue=value;if(!Browser.mainLoop.func){return 1}if(!Browser.mainLoop.running){Browser.mainLoop.running=true}if(mode==0){Browser.mainLoop.scheduler=function Browser_mainLoop_scheduler_setTimeout(){var timeUntilNextTick=Math.max(0,Browser.mainLoop.tickStartTime+value-_emscripten_get_now())|0;setTimeout(Browser.mainLoop.runner,timeUntilNextTick)};Browser.mainLoop.method="timeout"}else if(mode==1){Browser.mainLoop.scheduler=function Browser_mainLoop_scheduler_rAF(){Browser.requestAnimationFrame(Browser.mainLoop.runner)};Browser.mainLoop.method="rAF"}else if(mode==2){if(typeof setImmediate=="undefined"){var setImmediates=[];var emscriptenMainLoopMessageId="setimmediate";var Browser_setImmediate_messageHandler=function(event){if(event.data===emscriptenMainLoopMessageId||event.data.target===emscriptenMainLoopMessageId){event.stopPropagation();setImmediates.shift()()}};addEventListener("message",Browser_setImmediate_messageHandler,true);setImmediate=function Browser_emulated_setImmediate(func){setImmediates.push(func);if(ENVIRONMENT_IS_WORKER){if(Module["setImmediates"]===undefined)Module["setImmediates"]=[];Module["setImmediates"].push(func);postMessage({target:emscriptenMainLoopMessageId})}else postMessage(emscriptenMainLoopMessageId,"*")}}Browser.mainLoop.scheduler=function Browser_mainLoop_scheduler_setImmediate(){setImmediate(Browser.mainLoop.runner)};Browser.mainLoop.method="immediate"}return 0}var 
_emscripten_get_now;if(ENVIRONMENT_IS_NODE){_emscripten_get_now=(()=>{var t=process["hrtime"]();return t[0]*1e3+t[1]/1e6})}else _emscripten_get_now=(()=>performance.now());function _exit(status){exit(status)}function maybeExit(){}function setMainLoop(browserIterationFunc,fps,simulateInfiniteLoop,arg,noSetTiming){assert(!Browser.mainLoop.func,"emscripten_set_main_loop: there can only be one main loop function at once: call emscripten_cancel_main_loop to cancel the previous one before setting a new one with different parameters.");Browser.mainLoop.func=browserIterationFunc;Browser.mainLoop.arg=arg;var thisMainLoopId=Browser.mainLoop.currentlyRunningMainloop;function checkIsRunning(){if(thisMainLoopId<Browser.mainLoop.currentlyRunningMainloop){maybeExit();return false}return true}Browser.mainLoop.running=false;Browser.mainLoop.runner=function Browser_mainLoop_runner(){if(ABORT)return;if(Browser.mainLoop.queue.length>0){var start=Date.now();var blocker=Browser.mainLoop.queue.shift();blocker.func(blocker.arg);if(Browser.mainLoop.remainingBlockers){var remaining=Browser.mainLoop.remainingBlockers;var next=remaining%1==0?remaining-1:Math.floor(remaining);if(blocker.counted){Browser.mainLoop.remainingBlockers=next}else{next=next+.5;Browser.mainLoop.remainingBlockers=(8*remaining+next)/9}}out('main loop blocker "'+blocker.name+'" took '+(Date.now()-start)+" ms");Browser.mainLoop.updateStatus();if(!checkIsRunning())return;setTimeout(Browser.mainLoop.runner,0);return}if(!checkIsRunning())return;Browser.mainLoop.currentFrameNumber=Browser.mainLoop.currentFrameNumber+1|0;if(Browser.mainLoop.timingMode==1&&Browser.mainLoop.timingValue>1&&Browser.mainLoop.currentFrameNumber%Browser.mainLoop.timingValue!=0){Browser.mainLoop.scheduler();return}else if(Browser.mainLoop.timingMode==0){Browser.mainLoop.tickStartTime=_emscripten_get_now()}GL.newRenderingFrameStarted();Browser.mainLoop.runIter(browserIterationFunc);if(!checkIsRunning())return;if(typeof SDL=="object"&&SDL.audio&&SDL.audio.queueNewAudioData)SDL.audio.queueNewAudioData();Browser.mainLoop.scheduler()};if(!noSetTiming){if(fps&&fps>0)_emscripten_set_main_loop_timing(0,1e3/fps);else
_emscripten_set_main_loop_timing(1,1);Browser.mainLoop.scheduler()}if(simulateInfiniteLoop){throw"unwind"}}function callUserCallback(func,synchronous){if(ABORT){return}if(synchronous){func();return}try{func()}catch(e){handleException(e)}}function safeSetTimeout(func,timeout){return setTimeout(function(){callUserCallback(func)},timeout)}var Browser={mainLoop:{running:false,scheduler:null,method:"",currentlyRunningMainloop:0,func:null,arg:0,timingMode:0,timingValue:0,currentFrameNumber:0,queue:[],pause:function(){Browser.mainLoop.scheduler=null;Browser.mainLoop.currentlyRunningMainloop++},resume:function(){Browser.mainLoop.currentlyRunningMainloop++;var timingMode=Browser.mainLoop.timingMode;var timingValue=Browser.mainLoop.timingValue;var func=Browser.mainLoop.func;Browser.mainLoop.func=null;setMainLoop(func,0,false,Browser.mainLoop.arg,true);_emscripten_set_main_loop_timing(timingMode,timingValue);Browser.mainLoop.scheduler()},updateStatus:function(){if(Module["setStatus"]){var message=Module["statusMessage"]||"Please wait...";var remaining=Browser.mainLoop.remainingBlockers;var expected=Browser.mainLoop.expectedBlockers;if(remaining){if(remaining{assert(img.complete,"Image "+name+" could not be decoded");var canvas=document.createElement("canvas");canvas.width=img.width;canvas.height=img.height;var ctx=canvas.getContext("2d");ctx.drawImage(img,0,0);Module["preloadedImages"][name]=canvas;Browser.URLObject.revokeObjectURL(url);if(onload)onload(byteArray)});img.onerror=(event=>{out("Image "+url+" could not be decoded");if(onerror)onerror()});img.src=url};Module["preloadPlugins"].push(imagePlugin);var audioPlugin={};audioPlugin["canHandle"]=function audioPlugin_canHandle(name){return!Module.noAudioDecoding&&name.substr(-4)in{".ogg":1,".wav":1,".mp3":1}};audioPlugin["handle"]=function audioPlugin_handle(byteArray,name,onload,onerror){var done=false;function finish(audio){if(done)return;done=true;Module["preloadedAudios"][name]=audio;if(onload)onload(byteArray)}function 
fail(){if(done)return;done=true;Module["preloadedAudios"][name]=new Audio;if(onerror)onerror()}if(Browser.hasBlobConstructor){try{var b=new Blob([byteArray],{type:Browser.getMimetype(name)})}catch(e){return fail()}var url=Browser.URLObject.createObjectURL(b);var audio=new Audio;audio.addEventListener("canplaythrough",function(){finish(audio)},false);audio.onerror=function audio_onerror(event){if(done)return;out("warning: browser could not fully decode audio "+name+", trying slower base64 approach");function encode64(data){var BASE="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";var PAD="=";var ret="";var leftchar=0;var leftbits=0;for(var i=0;i<data.length;i++){leftchar=leftchar<<8|data[i];leftbits+=8;while(leftbits>=6){var curr=leftchar>>leftbits-6&63;leftbits-=6;ret+=BASE[curr]}}if(leftbits==2){ret+=BASE[(leftchar&3)<<4];ret+=PAD+PAD}else if(leftbits==4){ret+=BASE[(leftchar&15)<<2];ret+=PAD}return ret}audio.src="data:audio/x-"+name.substr(-3)+";base64,"+encode64(byteArray);finish(audio)};audio.src=url;safeSetTimeout(function(){finish(audio)},1e4)}else{return fail()}};Module["preloadPlugins"].push(audioPlugin);function pointerLockChange(){Browser.pointerLock=document["pointerLockElement"]===Module["canvas"]||document["mozPointerLockElement"]===Module["canvas"]||document["webkitPointerLockElement"]===Module["canvas"]||document["msPointerLockElement"]===Module["canvas"]}var
canvas=Module["canvas"];if(canvas){canvas.requestPointerLock=canvas["requestPointerLock"]||canvas["mozRequestPointerLock"]||canvas["webkitRequestPointerLock"]||canvas["msRequestPointerLock"]||function(){};canvas.exitPointerLock=document["exitPointerLock"]||document["mozExitPointerLock"]||document["webkitExitPointerLock"]||document["msExitPointerLock"]||function(){};canvas.exitPointerLock=canvas.exitPointerLock.bind(document);document.addEventListener("pointerlockchange",pointerLockChange,false);document.addEventListener("mozpointerlockchange",pointerLockChange,false);document.addEventListener("webkitpointerlockchange",pointerLockChange,false);document.addEventListener("mspointerlockchange",pointerLockChange,false);if(Module["elementPointerLock"]){canvas.addEventListener("click",function(ev){if(!Browser.pointerLock&&Module["canvas"].requestPointerLock){Module["canvas"].requestPointerLock();ev.preventDefault()}},false)}}},handledByPreloadPlugin:function(byteArray,fullname,finish,onerror){Browser.init();var handled=false;Module["preloadPlugins"].forEach(function(plugin){if(handled)return;if(plugin["canHandle"](fullname)){plugin["handle"](byteArray,fullname,finish,onerror);handled=true}});return handled},createContext:function(canvas,useWebGL,setInModule,webGLContextAttributes){if(useWebGL&&Module.ctx&&canvas==Module.canvas)return Module.ctx;var ctx;var contextHandle;if(useWebGL){var contextAttributes={antialias:false,alpha:false,majorVersion:typeof WebGL2RenderingContext!="undefined"?2:1};if(webGLContextAttributes){for(var attribute in webGLContextAttributes){contextAttributes[attribute]=webGLContextAttributes[attribute]}}if(typeof GL!="undefined"){contextHandle=GL.createContext(canvas,contextAttributes);if(contextHandle){ctx=GL.getContext(contextHandle).GLctx}}}else{ctx=canvas.getContext("2d")}if(!ctx)return null;if(setInModule){if(!useWebGL)assert(typeof GLctx=="undefined","cannot set in module if GLctx is used, but we are a non-GL context that would replace 
it");Module.ctx=ctx;if(useWebGL)GL.makeContextCurrent(contextHandle);Module.useWebGL=useWebGL;Browser.moduleContextCreatedCallbacks.forEach(function(callback){callback()});Browser.init()}return ctx},destroyContext:function(canvas,useWebGL,setInModule){},fullscreenHandlersInstalled:false,lockPointer:undefined,resizeCanvas:undefined,requestFullscreen:function(lockPointer,resizeCanvas){Browser.lockPointer=lockPointer;Browser.resizeCanvas=resizeCanvas;if(typeof Browser.lockPointer=="undefined")Browser.lockPointer=true;if(typeof Browser.resizeCanvas=="undefined")Browser.resizeCanvas=false;var canvas=Module["canvas"];function fullscreenChange(){Browser.isFullscreen=false;var canvasContainer=canvas.parentNode;if((document["fullscreenElement"]||document["mozFullScreenElement"]||document["msFullscreenElement"]||document["webkitFullscreenElement"]||document["webkitCurrentFullScreenElement"])===canvasContainer){canvas.exitFullscreen=Browser.exitFullscreen;if(Browser.lockPointer)canvas.requestPointerLock();Browser.isFullscreen=true;if(Browser.resizeCanvas){Browser.setFullscreenCanvasSize()}else{Browser.updateCanvasDimensions(canvas)}}else{canvasContainer.parentNode.insertBefore(canvas,canvasContainer);canvasContainer.parentNode.removeChild(canvasContainer);if(Browser.resizeCanvas){Browser.setWindowedCanvasSize()}else{Browser.updateCanvasDimensions(canvas)}}if(Module["onFullScreen"])Module["onFullScreen"](Browser.isFullscreen);if(Module["onFullscreen"])Module["onFullscreen"](Browser.isFullscreen)}if(!Browser.fullscreenHandlersInstalled){Browser.fullscreenHandlersInstalled=true;document.addEventListener("fullscreenchange",fullscreenChange,false);document.addEventListener("mozfullscreenchange",fullscreenChange,false);document.addEventListener("webkitfullscreenchange",fullscreenChange,false);document.addEventListener("MSFullscreenChange",fullscreenChange,false)}var 
canvasContainer=document.createElement("div");canvas.parentNode.insertBefore(canvasContainer,canvas);canvasContainer.appendChild(canvas);canvasContainer.requestFullscreen=canvasContainer["requestFullscreen"]||canvasContainer["mozRequestFullScreen"]||canvasContainer["msRequestFullscreen"]||(canvasContainer["webkitRequestFullscreen"]?function(){canvasContainer["webkitRequestFullscreen"](Element["ALLOW_KEYBOARD_INPUT"])}:null)||(canvasContainer["webkitRequestFullScreen"]?function(){canvasContainer["webkitRequestFullScreen"](Element["ALLOW_KEYBOARD_INPUT"])}:null);canvasContainer.requestFullscreen()},exitFullscreen:function(){if(!Browser.isFullscreen){return false}var CFS=document["exitFullscreen"]||document["cancelFullScreen"]||document["mozCancelFullScreen"]||document["msExitFullscreen"]||document["webkitCancelFullScreen"]||function(){};CFS.apply(document,[]);return true},nextRAF:0,fakeRequestAnimationFrame:function(func){var now=Date.now();if(Browser.nextRAF===0){Browser.nextRAF=now+1e3/60}else{while(now+2>=Browser.nextRAF){Browser.nextRAF+=1e3/60}}var delay=Math.max(Browser.nextRAF-now,0);setTimeout(func,delay)},requestAnimationFrame:function(func){if(typeof requestAnimationFrame=="function"){requestAnimationFrame(func);return}var RAF=Browser.fakeRequestAnimationFrame;RAF(func)},safeSetTimeout:function(func){return safeSetTimeout(func)},safeRequestAnimationFrame:function(func){return Browser.requestAnimationFrame(function(){callUserCallback(func)})},getMimetype:function(name){return{"jpg":"image/jpeg","jpeg":"image/jpeg","png":"image/png","bmp":"image/bmp","ogg":"audio/ogg","wav":"audio/wav","mp3":"audio/mpeg"}[name.substr(name.lastIndexOf(".")+1)]},getUserMedia:function(func){if(!window.getUserMedia){window.getUserMedia=navigator["getUserMedia"]||navigator["mozGetUserMedia"]}window.getUserMedia(func)},getMovementX:function(event){return event["movementX"]||event["mozMovementX"]||event["webkitMovementX"]||0},getMovementY:function(event){return 
event["movementY"]||event["mozMovementY"]||event["webkitMovementY"]||0},getMouseWheelDelta:function(event){var delta=0;switch(event.type){case"DOMMouseScroll":delta=event.detail/3;break;case"mousewheel":delta=event.wheelDelta/120;break;case"wheel":delta=event.deltaY;switch(event.deltaMode){case 0:delta/=100;break;case 1:delta/=3;break;case 2:delta*=80;break;default:throw"unrecognized mouse wheel delta mode: "+event.deltaMode}break;default:throw"unrecognized mouse wheel event: "+event.type}return delta},mouseX:0,mouseY:0,mouseMovementX:0,mouseMovementY:0,touches:{},lastTouches:{},calculateMouseEvent:function(event){if(Browser.pointerLock){if(event.type!="mousemove"&&"mozMovementX"in event){Browser.mouseMovementX=Browser.mouseMovementY=0}else{Browser.mouseMovementX=Browser.getMovementX(event);Browser.mouseMovementY=Browser.getMovementY(event)}if(typeof SDL!="undefined"){Browser.mouseX=SDL.mouseX+Browser.mouseMovementX;Browser.mouseY=SDL.mouseY+Browser.mouseMovementY}else{Browser.mouseX+=Browser.mouseMovementX;Browser.mouseY+=Browser.mouseMovementY}}else{var rect=Module["canvas"].getBoundingClientRect();var cw=Module["canvas"].width;var ch=Module["canvas"].height;var scrollX=typeof window.scrollX!="undefined"?window.scrollX:window.pageXOffset;var scrollY=typeof window.scrollY!="undefined"?window.scrollY:window.pageYOffset;if(event.type==="touchstart"||event.type==="touchend"||event.type==="touchmove"){var touch=event.touch;if(touch===undefined){return}var adjustedX=touch.pageX-(scrollX+rect.left);var adjustedY=touch.pageY-(scrollY+rect.top);adjustedX=adjustedX*(cw/rect.width);adjustedY=adjustedY*(ch/rect.height);var coords={x:adjustedX,y:adjustedY};if(event.type==="touchstart"){Browser.lastTouches[touch.identifier]=coords;Browser.touches[touch.identifier]=coords}else if(event.type==="touchend"||event.type==="touchmove"){var 
last=Browser.touches[touch.identifier];if(!last)last=coords;Browser.lastTouches[touch.identifier]=last;Browser.touches[touch.identifier]=coords}return}var x=event.pageX-(scrollX+rect.left);var y=event.pageY-(scrollY+rect.top);x=x*(cw/rect.width);y=y*(ch/rect.height);Browser.mouseMovementX=x-Browser.mouseX;Browser.mouseMovementY=y-Browser.mouseY;Browser.mouseX=x;Browser.mouseY=y}},resizeListeners:[],updateResizeListeners:function(){var canvas=Module["canvas"];Browser.resizeListeners.forEach(function(listener){listener(canvas.width,canvas.height)})},setCanvasSize:function(width,height,noUpdates){var canvas=Module["canvas"];Browser.updateCanvasDimensions(canvas,width,height);if(!noUpdates)Browser.updateResizeListeners()},windowedWidth:0,windowedHeight:0,setFullscreenCanvasSize:function(){if(typeof SDL!="undefined"){var flags=HEAPU32[SDL.screen>>2];flags=flags|8388608;HEAP32[SDL.screen>>2]=flags}Browser.updateCanvasDimensions(Module["canvas"]);Browser.updateResizeListeners()},setWindowedCanvasSize:function(){if(typeof SDL!="undefined"){var flags=HEAPU32[SDL.screen>>2];flags=flags&~8388608;HEAP32[SDL.screen>>2]=flags}Browser.updateCanvasDimensions(Module["canvas"]);Browser.updateResizeListeners()},updateCanvasDimensions:function(canvas,wNative,hNative){if(wNative&&hNative){canvas.widthNative=wNative;canvas.heightNative=hNative}else{wNative=canvas.widthNative;hNative=canvas.heightNative}var w=wNative;var h=hNative;if(Module["forcedAspectRatio"]&&Module["forcedAspectRatio"]>0){if(w/h=0;--i){JSEvents._removeHandler(i)}JSEvents.eventHandlers=[];JSEvents.deferredCalls=[]},registerRemoveEventListeners:function(){if(!JSEvents.removeEventListenersRegistered){__ATEXIT__.push(JSEvents.removeAllEventListeners);JSEvents.removeEventListenersRegistered=true}},deferredCalls:[],deferCall:function(targetFunction,precedence,argsList){function arraysHaveEqualContent(arrA,arrB){if(arrA.length!=arrB.length)return false;for(var i in arrA){if(arrA[i]!=arrB[i])return false}return true}for(var 
i in JSEvents.deferredCalls){var call=JSEvents.deferredCalls[i];if(call.targetFunction==targetFunction&&arraysHaveEqualContent(call.argsList,argsList)){return}}JSEvents.deferredCalls.push({targetFunction:targetFunction,precedence:precedence,argsList:argsList});JSEvents.deferredCalls.sort(function(x,y){return x.precedence2?UTF8ToString(cString):cString}var specialHTMLTargets=[0,typeof document!="undefined"?document:0,typeof window!="undefined"?window:0];function findEventTarget(target){target=maybeCStringToJsString(target);var domElement=specialHTMLTargets[target]||(typeof document!="undefined"?document.querySelector(target):undefined);return domElement}function findCanvasEventTarget(target){return findEventTarget(target)}function _emscripten_get_canvas_element_size(target,width,height){var canvas=findCanvasEventTarget(target);if(!canvas)return-4;HEAP32[width>>2]=canvas.width;HEAP32[height>>2]=canvas.height}function getCanvasElementSize(target){return withStackSave(function(){var w=stackAlloc(8);var h=w+4;var targetInt=stackAlloc(target.id.length+1);stringToUTF8(target.id,targetInt,target.id.length+1);var ret=_emscripten_get_canvas_element_size(targetInt,w,h);var size=[HEAP32[w>>2],HEAP32[h>>2]];return size})}function _emscripten_set_canvas_element_size(target,width,height){var canvas=findCanvasEventTarget(target);if(!canvas)return-4;canvas.width=width;canvas.height=height;return 0}function setCanvasElementSize(target,width,height){if(!target.controlTransferredOffscreen){target.width=width;target.height=height}else{withStackSave(function(){var targetInt=stackAlloc(target.id.length+1);stringToUTF8(target.id,targetInt,target.id.length+1);_emscripten_set_canvas_element_size(targetInt,width,height)})}}function registerRestoreOldStyle(canvas){var canvasSize=getCanvasElementSize(canvas);var oldWidth=canvasSize[0];var oldHeight=canvasSize[1];var oldCssWidth=canvas.style.width;var oldCssHeight=canvas.style.height;var oldBackgroundColor=canvas.style.backgroundColor;var 
oldDocumentBackgroundColor=document.body.style.backgroundColor;var oldPaddingLeft=canvas.style.paddingLeft;var oldPaddingRight=canvas.style.paddingRight;var oldPaddingTop=canvas.style.paddingTop;var oldPaddingBottom=canvas.style.paddingBottom;var oldMarginLeft=canvas.style.marginLeft;var oldMarginRight=canvas.style.marginRight;var oldMarginTop=canvas.style.marginTop;var oldMarginBottom=canvas.style.marginBottom;var oldDocumentBodyMargin=document.body.style.margin;var oldDocumentOverflow=document.documentElement.style.overflow;var oldDocumentScroll=document.body.scroll;var oldImageRendering=canvas.style.imageRendering;function restoreOldStyle(){var fullscreenElement=document.fullscreenElement||document.webkitFullscreenElement||document.msFullscreenElement;if(!fullscreenElement){document.removeEventListener("fullscreenchange",restoreOldStyle);document.removeEventListener("webkitfullscreenchange",restoreOldStyle);setCanvasElementSize(canvas,oldWidth,oldHeight);canvas.style.width=oldCssWidth;canvas.style.height=oldCssHeight;canvas.style.backgroundColor=oldBackgroundColor;if(!oldDocumentBackgroundColor)document.body.style.backgroundColor="white";document.body.style.backgroundColor=oldDocumentBackgroundColor;canvas.style.paddingLeft=oldPaddingLeft;canvas.style.paddingRight=oldPaddingRight;canvas.style.paddingTop=oldPaddingTop;canvas.style.paddingBottom=oldPaddingBottom;canvas.style.marginLeft=oldMarginLeft;canvas.style.marginRight=oldMarginRight;canvas.style.marginTop=oldMarginTop;canvas.style.marginBottom=oldMarginBottom;document.body.style.margin=oldDocumentBodyMargin;document.documentElement.style.overflow=oldDocumentOverflow;document.body.scroll=oldDocumentScroll;canvas.style.imageRendering=oldImageRendering;if(canvas.GLctxObject)canvas.GLctxObject.GLctx.viewport(0,0,oldWidth,oldHeight);if(currentFullscreenStrategy.canvasResizedCallback){(function(a1,a2,a3){return 
dynCall_iiii.apply(null,[currentFullscreenStrategy.canvasResizedCallback,a1,a2,a3])})(37,0,currentFullscreenStrategy.canvasResizedCallbackUserData)}}}document.addEventListener("fullscreenchange",restoreOldStyle);document.addEventListener("webkitfullscreenchange",restoreOldStyle);return restoreOldStyle}function setLetterbox(element,topBottom,leftRight){element.style.paddingLeft=element.style.paddingRight=leftRight+"px";element.style.paddingTop=element.style.paddingBottom=topBottom+"px"}function getBoundingClientRect(e){return specialHTMLTargets.indexOf(e)<0?e.getBoundingClientRect():{"left":0,"top":0}}function _JSEvents_resizeCanvasForFullscreen(target,strategy){var restoreOldStyle=registerRestoreOldStyle(target);var cssWidth=strategy.softFullscreen?innerWidth:screen.width;var cssHeight=strategy.softFullscreen?innerHeight:screen.height;var rect=getBoundingClientRect(target);var windowedCssWidth=rect.width;var windowedCssHeight=rect.height;var canvasSize=getCanvasElementSize(target);var windowedRttWidth=canvasSize[0];var windowedRttHeight=canvasSize[1];if(strategy.scaleMode==3){setLetterbox(target,(cssHeight-windowedCssHeight)/2,(cssWidth-windowedCssWidth)/2);cssWidth=windowedCssWidth;cssHeight=windowedCssHeight}else if(strategy.scaleMode==2){if(cssWidth*windowedRttHeight>2]=isFullscreen;HEAP32[eventStruct+4>>2]=JSEvents.fullscreenEnabled();var reportedElement=isFullscreen?fullscreenElement:JSEvents.previousFullscreenElement;var nodeName=JSEvents.getNodeNameForTarget(reportedElement);var id=reportedElement&&reportedElement.id?reportedElement.id:"";stringToUTF8(nodeName,eventStruct+8,128);stringToUTF8(id,eventStruct+136,128);HEAP32[eventStruct+264>>2]=reportedElement?reportedElement.clientWidth:0;HEAP32[eventStruct+268>>2]=reportedElement?reportedElement.clientHeight:0;HEAP32[eventStruct+272>>2]=screen.width;HEAP32[eventStruct+276>>2]=screen.height;if(isFullscreen){JSEvents.previousFullscreenElement=fullscreenElement}}function 
_emscripten_get_fullscreen_status(fullscreenStatus){if(!JSEvents.fullscreenEnabled())return-1;fillFullscreenChangeEventData(fullscreenStatus);return 0}function fillGamepadEventData(eventStruct,e){HEAPF64[eventStruct>>3]=e.timestamp;for(var i=0;i>3]=e.axes[i]}for(var i=0;i>3]=e.buttons[i].value}else{HEAPF64[eventStruct+i*8+528>>3]=e.buttons[i]}}for(var i=0;i>2]=e.buttons[i].pressed}else{HEAP32[eventStruct+i*4+1040>>2]=e.buttons[i]==1}}HEAP32[eventStruct+1296>>2]=e.connected;HEAP32[eventStruct+1300>>2]=e.index;HEAP32[eventStruct+8>>2]=e.axes.length;HEAP32[eventStruct+12>>2]=e.buttons.length;stringToUTF8(e.id,eventStruct+1304,64);stringToUTF8(e.mapping,eventStruct+1368,64)}function _emscripten_get_gamepad_status(index,gamepadState){if(index<0||index>=JSEvents.lastGamepadState.length)return-5;if(!JSEvents.lastGamepadState[index])return-7;fillGamepadEventData(gamepadState,JSEvents.lastGamepadState[index]);return 0}function _emscripten_get_heap_max(){return 2147483648}function _emscripten_get_now_res(){if(ENVIRONMENT_IS_NODE){return 1}else return 1e3}function _emscripten_get_num_gamepads(){return JSEvents.lastGamepadState.length}function _emscripten_html5_remove_all_event_listeners(){JSEvents.removeAllEventListeners()}function _emscripten_is_webgl_context_lost(contextHandle){return!GL.contexts[contextHandle]||GL.contexts[contextHandle].GLctx.isContextLost()}function reallyNegative(x){return x<0||x===0&&1/x===-Infinity}function convertI32PairToI53(lo,hi){return(lo>>>0)+hi*4294967296}function convertU32PairToI53(lo,hi){return(lo>>>0)+(hi>>>0)*4294967296}function reSign(value,bits){if(value<=0){return value}var half=bits<=32?Math.abs(1<=half&&(bits<=32||value>half)){value=-2*half+value}return value}function unSign(value,bits){if(value>=0){return value}return bits<=32?2*Math.abs(1<>3]);argIndex+=8}else if(type=="i64"){ret=[HEAP32[argIndex>>2],HEAP32[argIndex+4>>2]];argIndex+=8}else{type="i32";ret=HEAP32[argIndex>>2];argIndex+=4}return ret}var ret=[];var 
curr,next,currArg;while(1){var startTextIndex=textIndex;curr=HEAP8[textIndex>>0];if(curr===0)break;next=HEAP8[textIndex+1>>0];if(curr==37){var flagAlwaysSigned=false;var flagLeftAlign=false;var flagAlternative=false;var flagZeroPad=false;var flagPadSign=false;flagsLoop:while(1){switch(next){case 43:flagAlwaysSigned=true;break;case 45:flagLeftAlign=true;break;case 35:flagAlternative=true;break;case 48:if(flagZeroPad){break flagsLoop}else{flagZeroPad=true;break}case 32:flagPadSign=true;break;default:break flagsLoop}textIndex++;next=HEAP8[textIndex+1>>0]}var width=0;if(next==42){width=getNextArg("i32");textIndex++;next=HEAP8[textIndex+1>>0]}else{while(next>=48&&next<=57){width=width*10+(next-48);textIndex++;next=HEAP8[textIndex+1>>0]}}var precisionSet=false,precision=-1;if(next==46){precision=0;precisionSet=true;textIndex++;next=HEAP8[textIndex+1>>0];if(next==42){precision=getNextArg("i32");textIndex++}else{while(1){var precisionChr=HEAP8[textIndex+1>>0];if(precisionChr<48||precisionChr>57)break;precision=precision*10+(precisionChr-48);textIndex++}}next=HEAP8[textIndex+1>>0]}if(precision<0){precision=6;precisionSet=false}var argSize;switch(String.fromCharCode(next)){case"h":var nextNext=HEAP8[textIndex+2>>0];if(nextNext==104){textIndex++;argSize=1}else{argSize=2}break;case"l":var nextNext=HEAP8[textIndex+2>>0];if(nextNext==108){textIndex++;argSize=8}else{argSize=4}break;case"L":case"q":case"j":argSize=8;break;case"z":case"t":case"I":argSize=4;break;default:argSize=null}if(argSize)textIndex++;next=HEAP8[textIndex+1>>0];switch(String.fromCharCode(next)){case"d":case"i":case"u":case"o":case"x":case"X":case"p":{var signed=next==100||next==105;argSize=argSize||4;currArg=getNextArg("i"+argSize*8);var argText;if(argSize==8){currArg=next==117?convertU32PairToI53(currArg[0],currArg[1]):convertI32PairToI53(currArg[0],currArg[1])}if(argSize<=4){var limit=Math.pow(256,argSize)-1;currArg=(signed?reSign:unSign)(currArg&limit,argSize*8)}var currAbsArg=Math.abs(currArg);var 
prefix="";if(next==100||next==105){argText=reSign(currArg,8*argSize).toString(10)}else if(next==117){argText=unSign(currArg,8*argSize).toString(10);currArg=Math.abs(currArg)}else if(next==111){argText=(flagAlternative?"0":"")+currAbsArg.toString(8)}else if(next==120||next==88){prefix=flagAlternative&&currArg!=0?"0x":"";if(currArg<0){currArg=-currArg;argText=(currAbsArg-1).toString(16);var buffer=[];for(var i=0;i=0){if(flagAlwaysSigned){prefix="+"+prefix}else if(flagPadSign){prefix=" "+prefix}}if(argText.charAt(0)=="-"){prefix="-"+prefix;argText=argText.substr(1)}while(prefix.length+argText.lengthexponent&&exponent>=-4){next=(next==103?"f":"F").charCodeAt(0);precision-=exponent+1}else{next=(next==103?"e":"E").charCodeAt(0);precision--}effectivePrecision=Math.min(precision,20)}if(next==101||next==69){argText=currArg.toExponential(effectivePrecision);if(/[eE][-+]\d$/.test(argText)){argText=argText.slice(0,-1)+"0"+argText.slice(-1)}}else if(next==102||next==70){argText=currArg.toFixed(effectivePrecision);if(currArg===0&&reallyNegative(currArg)){argText="-"+argText}}var parts=argText.split("e");if(isGeneral&&!flagAlternative){while(parts[0].length>1&&parts[0].includes(".")&&(parts[0].slice(-1)=="0"||parts[0].slice(-1)==".")){parts[0]=parts[0].slice(0,-1)}}else{if(flagAlternative&&argText.indexOf(".")==-1)parts[0]+=".";while(precision>effectivePrecision++)parts[0]+="0"}argText=parts[0]+(parts.length>1?"e"+parts[1]:"");if(next==69)argText=argText.toUpperCase();if(currArg>=0){if(flagAlwaysSigned){argText="+"+argText}else if(flagPadSign){argText=" "+argText}}}while(argText.length>0])}}else{ret=ret.concat(intArrayFromString("(null)".substr(0,argLength),true))}if(flagLeftAlign){while(argLength0){ret.push(32)}if(!flagLeftAlign)ret.push(getNextArg("i8"));break}case"n":{var ptr=getNextArg("i32*");HEAP32[ptr>>2]=ret.length;break}case"%":{ret.push(curr);break}default:{for(var i=startTextIndex;i>0])}}}textIndex+=2}else{ret.push(curr);textIndex+=1}}return ret}function 
traverseStack(args){if(!args||!args.callee||!args.callee.name){return[null,"",""]}var funstr=args.callee.toString();var funcname=args.callee.name;var str="(";var first=true;for(var i in args){var a=args[i];if(!first){str+=", "}first=false;if(typeof a=="number"||typeof a=="string"){str+=a}else{str+="("+typeof a+")"}}str+=")";var caller=args.callee.caller;args=caller?caller.arguments:[];if(first)str="";return[args,funcname,str]}function _emscripten_get_callstack_js(flags){var callstack=jsStackTrace();var iThisFunc=callstack.lastIndexOf("_emscripten_log");var iThisFunc2=callstack.lastIndexOf("_emscripten_get_callstack");var iNextLine=callstack.indexOf("\n",Math.max(iThisFunc,iThisFunc2))+1;callstack=callstack.slice(iNextLine);if(flags&32){warnOnce("EM_LOG_DEMANGLE is deprecated; ignoring")}if(flags&8&&typeof emscripten_source_map=="undefined"){warnOnce('Source map information is not available, emscripten_log with EM_LOG_C_STACK will be ignored. Build with "--pre-js $EMSCRIPTEN/src/emscripten-source-map.min.js" linker flag to add source map loading to code.');flags^=8;flags|=16}var stack_args=null;if(flags&128){stack_args=traverseStack(arguments);while(stack_args[1].includes("_emscripten_"))stack_args=traverseStack(stack_args[0])}var lines=callstack.split("\n");callstack="";var newFirefoxRe=new RegExp("\\s*(.*?)@(.*?):([0-9]+):([0-9]+)");var firefoxRe=new RegExp("\\s*(.*?)@(.*):(.*)(:(.*))?");var chromeRe=new RegExp("\\s*at (.*?) 
\\((.*):(.*):(.*)\\)");for(var l in lines){var line=lines[l];var symbolName="";var file="";var lineno=0;var column=0;var parts=chromeRe.exec(line);if(parts&&parts.length==5){symbolName=parts[1];file=parts[2];lineno=parts[3];column=parts[4]}else{parts=newFirefoxRe.exec(line);if(!parts)parts=firefoxRe.exec(line);if(parts&&parts.length>=4){symbolName=parts[1];file=parts[2];lineno=parts[3];column=parts[4]|0}else{callstack+=line+"\n";continue}}var haveSourceMap=false;if(flags&8){var orig=emscripten_source_map.originalPositionFor({line:lineno,column:column});haveSourceMap=orig&&orig.source;if(haveSourceMap){if(flags&64){orig.source=orig.source.substring(orig.source.replace(/\\/g,"/").lastIndexOf("/")+1)}callstack+=" at "+symbolName+" ("+orig.source+":"+orig.line+":"+orig.column+")\n"}}if(flags&16||!haveSourceMap){if(flags&64){file=file.substring(file.replace(/\\/g,"/").lastIndexOf("/")+1)}callstack+=(haveSourceMap?" = "+symbolName:" at "+symbolName)+" ("+file+":"+lineno+":"+column+")\n"}if(flags&128&&stack_args[0]){if(stack_args[1]==symbolName&&stack_args[2].length>0){callstack=callstack.replace(/\s+$/,"");callstack+=" with values: "+stack_args[1]+stack_args[2]+"\n"}stack_args=traverseStack(stack_args[0])}}callstack=callstack.replace(/\s+$/,"");return callstack}function _emscripten_log_js(flags,str){if(flags&24){str=str.replace(/\s+$/,"");str+=(str.length>0?"\n":"")+_emscripten_get_callstack_js(flags)}if(flags&1){if(flags&4){console.error(str)}else if(flags&2){console.warn(str)}else if(flags&512){console.info(str)}else if(flags&256){console.debug(str)}else{console.log(str)}}else if(flags&6){err(str)}else{out(str)}}function _emscripten_log(flags,format,varargs){var result=formatString(format,varargs);var str=UTF8ArrayToString(result,0);_emscripten_log_js(flags,str)}function _emscripten_memcpy_big(dest,src,num){HEAPU8.copyWithin(dest,src,src+num)}function 
doRequestFullscreen(target,strategy){if(!JSEvents.fullscreenEnabled())return-1;target=findEventTarget(target);if(!target)return-4;if(!target.requestFullscreen&&!target.webkitRequestFullscreen){return-3}var canPerformRequests=JSEvents.canPerformEventHandlerRequests();if(!canPerformRequests){if(strategy.deferUntilInEventHandler){JSEvents.deferCall(_JSEvents_requestFullscreen,1,[target,strategy]);return 1}else{return-2}}return _JSEvents_requestFullscreen(target,strategy)}function _emscripten_request_fullscreen(target,deferUntilInEventHandler){var strategy={scaleMode:0,canvasResolutionScaleMode:0,filteringMode:0,deferUntilInEventHandler:deferUntilInEventHandler,canvasResizedCallbackTargetThread:2};return doRequestFullscreen(target,strategy)}function _emscripten_request_pointerlock(target,deferUntilInEventHandler){target=findEventTarget(target);if(!target)return-4;if(!target.requestPointerLock&&!target.msRequestPointerLock){return-1}var canPerformRequests=JSEvents.canPerformEventHandlerRequests();if(!canPerformRequests){if(deferUntilInEventHandler){JSEvents.deferCall(requestPointerLock,2,[target]);return 1}else{return-2}}return requestPointerLock(target)}function emscripten_realloc_buffer(size){try{wasmMemory.grow(size-buffer.byteLength+65535>>>16);updateGlobalBufferAndViews(wasmMemory.buffer);return 1}catch(e){}}function _emscripten_resize_heap(requestedSize){var oldSize=HEAPU8.length;requestedSize=requestedSize>>>0;var maxHeapSize=_emscripten_get_heap_max();if(requestedSize>maxHeapSize){return false}let alignUp=(x,multiple)=>x+(multiple-x%multiple)%multiple;for(var cutDown=1;cutDown<=4;cutDown*=2){var overGrownHeapSize=oldSize*(1+.2/cutDown);overGrownHeapSize=Math.min(overGrownHeapSize,requestedSize+100663296);var newSize=Math.min(maxHeapSize,alignUp(Math.max(requestedSize,overGrownHeapSize),65536));var replacement=emscripten_realloc_buffer(newSize);if(replacement){return true}}return false}function 
_emscripten_sample_gamepad_data(){return(JSEvents.lastGamepadState=navigator.getGamepads?navigator.getGamepads():navigator.webkitGetGamepads?navigator.webkitGetGamepads():null)?0:-1}function registerFocusEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.focusEvent)JSEvents.focusEvent=_malloc(256);var focusEventHandlerFunc=function(ev){var e=ev||event;var nodeName=JSEvents.getNodeNameForTarget(e.target);var id=e.target.id?e.target.id:"";var focusEvent=JSEvents.focusEvent;stringToUTF8(nodeName,focusEvent+0,128);stringToUTF8(id,focusEvent+128,128);if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,focusEvent,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:focusEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_blur_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerFocusEventCallback(target,userData,useCapture,callbackfunc,12,"blur",targetThread);return 0}function _emscripten_set_focus_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerFocusEventCallback(target,userData,useCapture,callbackfunc,13,"focus",targetThread);return 0}function registerFullscreenChangeEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.fullscreenChangeEvent)JSEvents.fullscreenChangeEvent=_malloc(280);var fullscreenChangeEventhandlerFunc=function(ev){var e=ev||event;var fullscreenChangeEvent=JSEvents.fullscreenChangeEvent;fillFullscreenChangeEventData(fullscreenChangeEvent);if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,fullscreenChangeEvent,userData))e.preventDefault()};var 
eventHandler={target:target,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:fullscreenChangeEventhandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_fullscreenchange_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){if(!JSEvents.fullscreenEnabled())return-1;target=findEventTarget(target);if(!target)return-4;registerFullscreenChangeEventCallback(target,userData,useCapture,callbackfunc,19,"fullscreenchange",targetThread);registerFullscreenChangeEventCallback(target,userData,useCapture,callbackfunc,19,"webkitfullscreenchange",targetThread);return 0}function registerGamepadEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.gamepadEvent)JSEvents.gamepadEvent=_malloc(1432);var gamepadEventHandlerFunc=function(ev){var e=ev||event;var gamepadEvent=JSEvents.gamepadEvent;fillGamepadEventData(gamepadEvent,e["gamepad"]);if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,gamepadEvent,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),allowsDeferredCalls:true,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:gamepadEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_gamepadconnected_callback_on_thread(userData,useCapture,callbackfunc,targetThread){if(!navigator.getGamepads&&!navigator.webkitGetGamepads)return-1;registerGamepadEventCallback(2,userData,useCapture,callbackfunc,26,"gamepadconnected",targetThread);return 0}function _emscripten_set_gamepaddisconnected_callback_on_thread(userData,useCapture,callbackfunc,targetThread){if(!navigator.getGamepads&&!navigator.webkitGetGamepads)return-1;registerGamepadEventCallback(2,userData,useCapture,callbackfunc,27,"gamepaddisconnected",targetThread);return 0}function _emscripten_set_interval(cb,msecs,userData){return 
setInterval(function(){callUserCallback(function(){(function(a1){dynCall_vi.apply(null,[cb,a1])})(userData)})},msecs)}function registerKeyEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.keyEvent)JSEvents.keyEvent=_malloc(176);var keyEventHandlerFunc=function(e){var keyEventData=JSEvents.keyEvent;HEAPF64[keyEventData>>3]=e.timeStamp;var idx=keyEventData>>2;HEAP32[idx+2]=e.location;HEAP32[idx+3]=e.ctrlKey;HEAP32[idx+4]=e.shiftKey;HEAP32[idx+5]=e.altKey;HEAP32[idx+6]=e.metaKey;HEAP32[idx+7]=e.repeat;HEAP32[idx+8]=e.charCode;HEAP32[idx+9]=e.keyCode;HEAP32[idx+10]=e.which;stringToUTF8(e.key||"",keyEventData+44,32);stringToUTF8(e.code||"",keyEventData+76,32);stringToUTF8(e.char||"",keyEventData+108,32);stringToUTF8(e.locale||"",keyEventData+140,32);if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,keyEventData,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),allowsDeferredCalls:true,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:keyEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_keydown_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerKeyEventCallback(target,userData,useCapture,callbackfunc,2,"keydown",targetThread);return 0}function _emscripten_set_keypress_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerKeyEventCallback(target,userData,useCapture,callbackfunc,1,"keypress",targetThread);return 0}function _emscripten_set_keyup_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerKeyEventCallback(target,userData,useCapture,callbackfunc,3,"keyup",targetThread);return 0}function _emscripten_set_main_loop(func,fps,simulateInfiniteLoop){var browserIterationFunc=function(){dynCall_v.call(null,func)};setMainLoop(browserIterationFunc,fps,simulateInfiniteLoop)}function 
fillMouseEventData(eventStruct,e,target){HEAPF64[eventStruct>>3]=e.timeStamp;var idx=eventStruct>>2;HEAP32[idx+2]=e.screenX;HEAP32[idx+3]=e.screenY;HEAP32[idx+4]=e.clientX;HEAP32[idx+5]=e.clientY;HEAP32[idx+6]=e.ctrlKey;HEAP32[idx+7]=e.shiftKey;HEAP32[idx+8]=e.altKey;HEAP32[idx+9]=e.metaKey;HEAP16[idx*2+20]=e.button;HEAP16[idx*2+21]=e.buttons;HEAP32[idx+11]=e["movementX"];HEAP32[idx+12]=e["movementY"];var rect=getBoundingClientRect(target);HEAP32[idx+13]=e.clientX-rect.left;HEAP32[idx+14]=e.clientY-rect.top}function registerMouseEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.mouseEvent)JSEvents.mouseEvent=_malloc(72);target=findEventTarget(target);var mouseEventHandlerFunc=function(ev){var e=ev||event;fillMouseEventData(JSEvents.mouseEvent,e,target);if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,JSEvents.mouseEvent,userData))e.preventDefault()};var eventHandler={target:target,allowsDeferredCalls:eventTypeString!="mousemove"&&eventTypeString!="mouseenter"&&eventTypeString!="mouseleave",eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:mouseEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_mousedown_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,5,"mousedown",targetThread);return 0}function _emscripten_set_mousemove_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,8,"mousemove",targetThread);return 0}function _emscripten_set_mouseup_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,6,"mouseup",targetThread);return 0}function 
registerTouchEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.touchEvent)JSEvents.touchEvent=_malloc(1696);target=findEventTarget(target);var touchEventHandlerFunc=function(e){var t,touches={},et=e.touches;for(var i=0;i>2;HEAP32[idx+3]=e.ctrlKey;HEAP32[idx+4]=e.shiftKey;HEAP32[idx+5]=e.altKey;HEAP32[idx+6]=e.metaKey;idx+=7;var targetRect=getBoundingClientRect(target);var numTouches=0;for(var i in touches){var t=touches[i];HEAP32[idx+0]=t.identifier;HEAP32[idx+1]=t.screenX;HEAP32[idx+2]=t.screenY;HEAP32[idx+3]=t.clientX;HEAP32[idx+4]=t.clientY;HEAP32[idx+5]=t.pageX;HEAP32[idx+6]=t.pageY;HEAP32[idx+7]=t.isChanged;HEAP32[idx+8]=t.onTarget;HEAP32[idx+9]=t.clientX-targetRect.left;HEAP32[idx+10]=t.clientY-targetRect.top;idx+=13;if(++numTouches>31){break}}HEAP32[touchEvent+8>>2]=numTouches;if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,touchEvent,userData))e.preventDefault()};var eventHandler={target:target,allowsDeferredCalls:eventTypeString=="touchstart"||eventTypeString=="touchend",eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:touchEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_touchcancel_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,25,"touchcancel",targetThread);return 0}function _emscripten_set_touchend_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,23,"touchend",targetThread);return 0}function _emscripten_set_touchmove_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,24,"touchmove",targetThread);return 0}function 
_emscripten_set_touchstart_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,22,"touchstart",targetThread);return 0}function registerWheelEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.wheelEvent)JSEvents.wheelEvent=_malloc(104);var wheelHandlerFunc=function(ev){var e=ev||event;var wheelEvent=JSEvents.wheelEvent;fillMouseEventData(wheelEvent,e,target);HEAPF64[wheelEvent+72>>3]=e["deltaX"];HEAPF64[wheelEvent+80>>3]=e["deltaY"];HEAPF64[wheelEvent+88>>3]=e["deltaZ"];HEAP32[wheelEvent+96>>2]=e["deltaMode"];if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,wheelEvent,userData))e.preventDefault()};var eventHandler={target:target,allowsDeferredCalls:true,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:wheelHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_wheel_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){target=findEventTarget(target);if(typeof target.onwheel!="undefined"){registerWheelEventCallback(target,userData,useCapture,callbackfunc,9,"wheel",targetThread);return 0}else{return-1}}function __webgl_enable_ANGLE_instanced_arrays(ctx){var ext=ctx.getExtension("ANGLE_instanced_arrays");if(ext){ctx["vertexAttribDivisor"]=function(index,divisor){ext["vertexAttribDivisorANGLE"](index,divisor)};ctx["drawArraysInstanced"]=function(mode,first,count,primcount){ext["drawArraysInstancedANGLE"](mode,first,count,primcount)};ctx["drawElementsInstanced"]=function(mode,count,type,indices,primcount){ext["drawElementsInstancedANGLE"](mode,count,type,indices,primcount)};return 1}}function __webgl_enable_OES_vertex_array_object(ctx){var ext=ctx.getExtension("OES_vertex_array_object");if(ext){ctx["createVertexArray"]=function(){return 
ext["createVertexArrayOES"]()};ctx["deleteVertexArray"]=function(vao){ext["deleteVertexArrayOES"](vao)};ctx["bindVertexArray"]=function(vao){ext["bindVertexArrayOES"](vao)};ctx["isVertexArray"]=function(vao){return ext["isVertexArrayOES"](vao)};return 1}}function __webgl_enable_WEBGL_draw_buffers(ctx){var ext=ctx.getExtension("WEBGL_draw_buffers");if(ext){ctx["drawBuffers"]=function(n,bufs){ext["drawBuffersWEBGL"](n,bufs)};return 1}}function __webgl_enable_WEBGL_draw_instanced_base_vertex_base_instance(ctx){return!!(ctx.dibvbi=ctx.getExtension("WEBGL_draw_instanced_base_vertex_base_instance"))}function __webgl_enable_WEBGL_multi_draw_instanced_base_vertex_base_instance(ctx){return!!(ctx.mdibvbi=ctx.getExtension("WEBGL_multi_draw_instanced_base_vertex_base_instance"))}function __webgl_enable_WEBGL_multi_draw(ctx){return!!(ctx.multiDrawWebgl=ctx.getExtension("WEBGL_multi_draw"))}var GL={counter:1,buffers:[],mappedBuffers:{},programs:[],framebuffers:[],renderbuffers:[],textures:[],shaders:[],vaos:[],contexts:[],offscreenCanvases:{},queries:[],samplers:[],transformFeedbacks:[],syncs:[],byteSizeByTypeRoot:5120,byteSizeByType:[1,1,2,2,4,4,4,2,3,4,8],stringCache:{},stringiCache:{},unpackAlignment:4,recordError:function recordError(errorCode){if(!GL.lastError){GL.lastError=errorCode}},getNewId:function(table){var ret=GL.counter++;for(var i=table.length;i>1;var quadIndexes=new Uint16Array(numIndexes);var i=0,v=0;while(1){quadIndexes[i++]=v;if(i>=numIndexes)break;quadIndexes[i++]=v+1;if(i>=numIndexes)break;quadIndexes[i++]=v+2;if(i>=numIndexes)break;quadIndexes[i++]=v;if(i>=numIndexes)break;quadIndexes[i++]=v+2;if(i>=numIndexes)break;quadIndexes[i++]=v+3;if(i>=numIndexes)break;v+=4}context.GLctx.bufferData(34963,quadIndexes,35044);context.GLctx.bindBuffer(34963,null)}},getTempVertexBuffer:function getTempVertexBuffer(sizeBytes){var idx=GL.log2ceilLookup(sizeBytes);var ringbuffer=GL.currentContext.tempVertexBuffers1[idx];var 
nextFreeBufferIndex=GL.currentContext.tempVertexBufferCounters1[idx];GL.currentContext.tempVertexBufferCounters1[idx]=GL.currentContext.tempVertexBufferCounters1[idx]+1&GL.numTempVertexBuffersPerSize-1;var vbo=ringbuffer[nextFreeBufferIndex];if(vbo){return vbo}var prevVBO=GLctx.getParameter(34964);ringbuffer[nextFreeBufferIndex]=GLctx.createBuffer();GLctx.bindBuffer(34962,ringbuffer[nextFreeBufferIndex]);GLctx.bufferData(34962,1<>2]:-1;source+=UTF8ToString(HEAP32[string+i*4>>2],len<0?undefined:len)}return source},calcBufLength:function calcBufLength(size,type,stride,count){if(stride>0){return count*stride}var typeSize=GL.byteSizeByType[type-GL.byteSizeByTypeRoot];return size*typeSize*count},usedTempBuffers:[],preDrawHandleClientVertexAttribBindings:function preDrawHandleClientVertexAttribBindings(count){GL.resetBufferBinding=false;for(var i=0;i1?canvas.getContext("webgl2",webGLContextAttributes):canvas.getContext("webgl",webGLContextAttributes);if(!ctx)return 0;var handle=GL.registerContext(ctx,webGLContextAttributes);return handle},registerContext:function(ctx,webGLContextAttributes){var handle=GL.getNewId(GL.contexts);var context={handle:handle,attributes:webGLContextAttributes,version:webGLContextAttributes.majorVersion,GLctx:ctx};if(ctx.canvas)ctx.canvas.GLctxObject=context;GL.contexts[handle]=context;if(typeof webGLContextAttributes.enableExtensionsByDefault=="undefined"||webGLContextAttributes.enableExtensionsByDefault){GL.initExtensions(context)}context.maxVertexAttribs=context.GLctx.getParameter(34921);context.clientBuffers=[];for(var i=0;i=2){GLctx.disjointTimerQueryExt=GLctx.getExtension("EXT_disjoint_timer_query_webgl2")}if(context.version<2||!GLctx.disjointTimerQueryExt){GLctx.disjointTimerQueryExt=GLctx.getExtension("EXT_disjoint_timer_query")}__webgl_enable_WEBGL_multi_draw(GLctx);var exts=GLctx.getSupportedExtensions()||[];exts.forEach(function(ext){if(!ext.includes("lose_context")&&!ext.includes("debug")){GLctx.getExtension(ext)}})}};var 
__emscripten_webgl_power_preferences=["default","low-power","high-performance"];function _emscripten_webgl_do_create_context(target,attributes){var a=attributes>>2;var powerPreference=HEAP32[a+(24>>2)];var contextAttributes={"alpha":!!HEAP32[a+(0>>2)],"depth":!!HEAP32[a+(4>>2)],"stencil":!!HEAP32[a+(8>>2)],"antialias":!!HEAP32[a+(12>>2)],"premultipliedAlpha":!!HEAP32[a+(16>>2)],"preserveDrawingBuffer":!!HEAP32[a+(20>>2)],"powerPreference":__emscripten_webgl_power_preferences[powerPreference],"failIfMajorPerformanceCaveat":!!HEAP32[a+(28>>2)],majorVersion:HEAP32[a+(32>>2)],minorVersion:HEAP32[a+(36>>2)],enableExtensionsByDefault:HEAP32[a+(40>>2)],explicitSwapControl:HEAP32[a+(44>>2)],proxyContextToMainThread:HEAP32[a+(48>>2)],renderViaOffscreenBackBuffer:HEAP32[a+(52>>2)]};var canvas=findCanvasEventTarget(target);if(!canvas){return 0}if(contextAttributes.explicitSwapControl){return 0}var contextHandle=GL.createContext(canvas,contextAttributes);return contextHandle}function _emscripten_webgl_create_context(a0,a1){return _emscripten_webgl_do_create_context(a0,a1)}function _emscripten_webgl_destroy_context(contextHandle){if(GL.currentContext==contextHandle)GL.currentContext=0;GL.deleteContext(contextHandle)}function _emscripten_webgl_enable_extension(contextHandle,extension){var context=GL.getContext(contextHandle);var 
extString=UTF8ToString(extension);if(extString.startsWith("GL_"))extString=extString.substr(3);if(extString=="ANGLE_instanced_arrays")__webgl_enable_ANGLE_instanced_arrays(GLctx);if(extString=="OES_vertex_array_object")__webgl_enable_OES_vertex_array_object(GLctx);if(extString=="WEBGL_draw_buffers")__webgl_enable_WEBGL_draw_buffers(GLctx);if(extString=="WEBGL_draw_instanced_base_vertex_base_instance")__webgl_enable_WEBGL_draw_instanced_base_vertex_base_instance(GLctx);if(extString=="WEBGL_multi_draw_instanced_base_vertex_base_instance")__webgl_enable_WEBGL_multi_draw_instanced_base_vertex_base_instance(GLctx);if(extString=="WEBGL_multi_draw")__webgl_enable_WEBGL_multi_draw(GLctx);var ext=context.GLctx.getExtension(extString);return!!ext}function _emscripten_webgl_do_get_current_context(){return GL.currentContext?GL.currentContext.handle:0}function _emscripten_webgl_get_current_context(){return _emscripten_webgl_do_get_current_context()}function _emscripten_webgl_init_context_attributes(attributes){var a=attributes>>2;for(var i=0;i<56>>2;++i){HEAP32[a+i]=0}HEAP32[a+(0>>2)]=HEAP32[a+(4>>2)]=HEAP32[a+(12>>2)]=HEAP32[a+(16>>2)]=HEAP32[a+(32>>2)]=HEAP32[a+(40>>2)]=1}function _emscripten_webgl_make_context_current(contextHandle){var success=GL.makeContextCurrent(contextHandle);return success?0:-5}var ENV={};function getExecutableName(){return thisProgram||"./this.program"}function getEnvStrings(){if(!getEnvStrings.strings){var lang=(typeof navigator=="object"&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8";var env={"USER":"web_user","LOGNAME":"web_user","PATH":"/","PWD":"/","HOME":"/home/web_user","LANG":lang,"_":getExecutableName()};for(var x in ENV){if(ENV[x]===undefined)delete env[x];else env[x]=ENV[x]}var strings=[];for(var x in env){strings.push(x+"="+env[x])}getEnvStrings.strings=strings}return getEnvStrings.strings}function _environ_get(__environ,environ_buf){var bufSize=0;getEnvStrings().forEach(function(string,i){var 
ptr=environ_buf+bufSize;HEAP32[__environ+i*4>>2]=ptr;writeAsciiToMemory(string,ptr);bufSize+=string.length+1});return 0}function _environ_sizes_get(penviron_count,penviron_buf_size){var strings=getEnvStrings();HEAP32[penviron_count>>2]=strings.length;var bufSize=0;strings.forEach(function(string){bufSize+=string.length+1});HEAP32[penviron_buf_size>>2]=bufSize;return 0}function _fd_close(fd){try{var stream=SYSCALLS.getStreamFromFD(fd);FS.close(stream);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function _fd_fdstat_get(fd,pbuf){try{var stream=SYSCALLS.getStreamFromFD(fd);var type=stream.tty?2:FS.isDir(stream.mode)?3:FS.isLink(stream.mode)?7:4;HEAP8[pbuf>>0]=type;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function _fd_read(fd,iov,iovcnt,pnum){try{var stream=SYSCALLS.getStreamFromFD(fd);var num=SYSCALLS.doReadv(stream,iov,iovcnt);HEAP32[pnum>>2]=num;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function _fd_seek(fd,offset_low,offset_high,whence,newOffset){try{var stream=SYSCALLS.getStreamFromFD(fd);var HIGH_OFFSET=4294967296;var offset=offset_high*HIGH_OFFSET+(offset_low>>>0);var DOUBLE_LIMIT=9007199254740992;if(offset<=-DOUBLE_LIMIT||offset>=DOUBLE_LIMIT){return-61}FS.llseek(stream,offset,whence);tempI64=[stream.position>>>0,(tempDouble=stream.position,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[newOffset>>2]=tempI64[0],HEAP32[newOffset+4>>2]=tempI64[1];if(stream.getdents&&offset===0&&whence===0)stream.getdents=null;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function _fd_write(fd,iov,iovcnt,pnum){try{var stream=SYSCALLS.getStreamFromFD(fd);var num=SYSCALLS.doWritev(stream,iov,iovcnt);HEAP32[pnum>>2]=num;return 
0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function _getTempRet0(){return getTempRet0()}function _getaddrinfo(node,service,hint,out){var addr=0;var port=0;var flags=0;var family=0;var type=0;var proto=0;var ai;function allocaddrinfo(family,type,proto,canon,addr,port){var sa,salen,ai;var errno;salen=family===10?28:16;addr=family===10?inetNtop6(addr):inetNtop4(addr);sa=_malloc(salen);errno=writeSockaddr(sa,family,addr,port);assert(!errno);ai=_malloc(32);HEAP32[ai+4>>2]=family;HEAP32[ai+8>>2]=type;HEAP32[ai+12>>2]=proto;HEAP32[ai+24>>2]=canon;HEAP32[ai+20>>2]=sa;if(family===10){HEAP32[ai+16>>2]=28}else{HEAP32[ai+16>>2]=16}HEAP32[ai+28>>2]=0;return ai}if(hint){flags=HEAP32[hint>>2];family=HEAP32[hint+4>>2];type=HEAP32[hint+8>>2];proto=HEAP32[hint+12>>2]}if(type&&!proto){proto=type===2?17:6}if(!type&&proto){type=proto===17?2:1}if(proto===0){proto=6}if(type===0){type=1}if(!node&&!service){return-2}if(flags&~(1|2|4|1024|8|16|32)){return-1}if(hint!==0&&HEAP32[hint>>2]&2&&!node){return-1}if(flags&32){return-2}if(type!==0&&type!==1&&type!==2){return-7}if(family!==0&&family!==2&&family!==10){return-6}if(service){service=UTF8ToString(service);port=parseInt(service,10);if(isNaN(port)){if(flags&1024){return-2}return-8}}if(!node){if(family===0){family=2}if((flags&1)===0){if(family===2){addr=_htonl(2130706433)}else{addr=[0,0,0,1]}}ai=allocaddrinfo(family,type,proto,null,addr,port);HEAP32[out>>2]=ai;return 0}node=UTF8ToString(node);addr=inetPton4(node);if(addr!==null){if(family===0||family===2){family=2}else if(family===10&&flags&8){addr=[0,0,_htonl(65535),addr];family=10}else{return-2}}else{addr=inetPton6(node);if(addr!==null){if(family===0||family===10){family=10}else{return-2}}}if(addr!=null){ai=allocaddrinfo(family,type,proto,node,addr,port);HEAP32[out>>2]=ai;return 0}if(flags&4){return-2}node=DNS.lookup_name(node);addr=inetPton4(node);if(family===0){family=2}else 
if(family===10){addr=[0,0,_htonl(65535),addr]}ai=allocaddrinfo(family,type,proto,null,addr,port);HEAP32[out>>2]=ai;return 0}function getHostByName(name){var ret=_malloc(20);var nameBuf=_malloc(name.length+1);stringToUTF8(name,nameBuf,name.length+1);HEAP32[ret>>2]=nameBuf;var aliasesBuf=_malloc(4);HEAP32[aliasesBuf>>2]=0;HEAP32[ret+4>>2]=aliasesBuf;var afinet=2;HEAP32[ret+8>>2]=afinet;HEAP32[ret+12>>2]=4;var addrListBuf=_malloc(12);HEAP32[addrListBuf>>2]=addrListBuf+8;HEAP32[addrListBuf+4>>2]=0;HEAP32[addrListBuf+8>>2]=inetPton4(DNS.lookup_name(name));HEAP32[ret+16>>2]=addrListBuf;return ret}function _gethostbyaddr(addr,addrlen,type){if(type!==2){setErrNo(5);return null}addr=HEAP32[addr>>2];var host=inetNtop4(addr);var lookup=DNS.lookup_addr(host);if(lookup){host=lookup}return getHostByName(host)}function _gethostbyname(name){return getHostByName(UTF8ToString(name))}function _getnameinfo(sa,salen,node,nodelen,serv,servlen,flags){var info=readSockaddr(sa,salen);if(info.errno){return-6}var port=info.port;var addr=info.addr;var overflowed=false;if(node&&nodelen){var lookup;if(flags&1||!(lookup=DNS.lookup_addr(addr))){if(flags&8){return-2}}else{addr=lookup}var numBytesWrittenExclNull=stringToUTF8(addr,node,nodelen);if(numBytesWrittenExclNull+1>=nodelen){overflowed=true}}if(serv&&servlen){port=""+port;var numBytesWrittenExclNull=stringToUTF8(port,serv,servlen);if(numBytesWrittenExclNull+1>=servlen){overflowed=true}}if(overflowed){return-12}return 0}function _glActiveTexture(x0){GLctx["activeTexture"](x0)}function _glAttachShader(program,shader){program=GL.programs[program];shader=GL.shaders[shader];program[shader.shaderType]=shader;GLctx.attachShader(program,shader)}function _glBeginQuery(target,id){GLctx["beginQuery"](target,GL.queries[id])}function _glBindAttribLocation(program,index,name){GLctx.bindAttribLocation(GL.programs[program],index,UTF8ToString(name))}function _glBindBuffer(target,buffer){if(target==34962){GLctx.currentArrayBufferBinding=buffer}else 
if(target==34963){GLctx.currentElementArrayBufferBinding=buffer}if(target==35051){GLctx.currentPixelPackBufferBinding=buffer}else if(target==35052){GLctx.currentPixelUnpackBufferBinding=buffer}GLctx.bindBuffer(target,GL.buffers[buffer])}function _glBindBufferBase(target,index,buffer){GLctx["bindBufferBase"](target,index,GL.buffers[buffer])}function _glBindBufferRange(target,index,buffer,offset,ptrsize){GLctx["bindBufferRange"](target,index,GL.buffers[buffer],offset,ptrsize)}function _glBindFramebuffer(target,framebuffer){GLctx.bindFramebuffer(target,GL.framebuffers[framebuffer])}function _glBindRenderbuffer(target,renderbuffer){GLctx.bindRenderbuffer(target,GL.renderbuffers[renderbuffer])}function _glBindSampler(unit,sampler){GLctx["bindSampler"](unit,GL.samplers[sampler])}function _glBindTexture(target,texture){GLctx.bindTexture(target,GL.textures[texture])}function _glBindVertexArray(vao){GLctx["bindVertexArray"](GL.vaos[vao]);var ibo=GLctx.getParameter(34965);GLctx.currentElementArrayBufferBinding=ibo?ibo.name|0:0}function _glBlendEquation(x0){GLctx["blendEquation"](x0)}function _glBlendEquationSeparate(x0,x1){GLctx["blendEquationSeparate"](x0,x1)}function _glBlendFuncSeparate(x0,x1,x2,x3){GLctx["blendFuncSeparate"](x0,x1,x2,x3)}function _glBlitFramebuffer(x0,x1,x2,x3,x4,x5,x6,x7,x8,x9){GLctx["blitFramebuffer"](x0,x1,x2,x3,x4,x5,x6,x7,x8,x9)}function _glBufferData(target,size,data,usage){if(GL.currentContext.version>=2){if(data){GLctx.bufferData(target,HEAPU8,usage,data,size)}else{GLctx.bufferData(target,size,usage)}}else{GLctx.bufferData(target,data?HEAPU8.subarray(data,data+size):size,usage)}}function _glBufferSubData(target,offset,size,data){if(GL.currentContext.version>=2){GLctx.bufferSubData(target,offset,HEAPU8,data,size);return}GLctx.bufferSubData(target,offset,HEAPU8.subarray(data,data+size))}function _glCheckFramebufferStatus(x0){return GLctx["checkFramebufferStatus"](x0)}function _glClear(x0){GLctx["clear"](x0)}function 
_glClearBufferfi(x0,x1,x2,x3){GLctx["clearBufferfi"](x0,x1,x2,x3)}function _glClearBufferfv(buffer,drawbuffer,value){GLctx["clearBufferfv"](buffer,drawbuffer,HEAPF32,value>>2)}function _glClearBufferuiv(buffer,drawbuffer,value){GLctx["clearBufferuiv"](buffer,drawbuffer,HEAPU32,value>>2)}function _glClearColor(x0,x1,x2,x3){GLctx["clearColor"](x0,x1,x2,x3)}function _glClearDepthf(x0){GLctx["clearDepth"](x0)}function _glClearStencil(x0){GLctx["clearStencil"](x0)}function _glClientWaitSync(sync,flags,timeoutLo,timeoutHi){return GLctx.clientWaitSync(GL.syncs[sync],flags,convertI32PairToI53(timeoutLo,timeoutHi))}function _glColorMask(red,green,blue,alpha){GLctx.colorMask(!!red,!!green,!!blue,!!alpha)}function _glCompileShader(shader){GLctx.compileShader(GL.shaders[shader])}function _glCompressedTexImage2D(target,level,internalFormat,width,height,border,imageSize,data){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,imageSize,data)}else{GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,HEAPU8,data,imageSize)}return}GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,data?HEAPU8.subarray(data,data+imageSize):null)}function _glCompressedTexImage3D(target,level,internalFormat,width,height,depth,border,imageSize,data){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexImage3D"](target,level,internalFormat,width,height,depth,border,imageSize,data)}else{GLctx["compressedTexImage3D"](target,level,internalFormat,width,height,depth,border,HEAPU8,data,imageSize)}}function 
_glCompressedTexSubImage2D(target,level,xoffset,yoffset,width,height,format,imageSize,data){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,imageSize,data)}else{GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,HEAPU8,data,imageSize)}return}GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,data?HEAPU8.subarray(data,data+imageSize):null)}function _glCompressedTexSubImage3D(target,level,xoffset,yoffset,zoffset,width,height,depth,format,imageSize,data){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,imageSize,data)}else{GLctx["compressedTexSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,HEAPU8,data,imageSize)}}function _glCopyBufferSubData(x0,x1,x2,x3,x4){GLctx["copyBufferSubData"](x0,x1,x2,x3,x4)}function _glCopyTexImage2D(x0,x1,x2,x3,x4,x5,x6,x7){GLctx["copyTexImage2D"](x0,x1,x2,x3,x4,x5,x6,x7)}function _glCopyTexSubImage2D(x0,x1,x2,x3,x4,x5,x6,x7){GLctx["copyTexSubImage2D"](x0,x1,x2,x3,x4,x5,x6,x7)}function _glCreateProgram(){var id=GL.getNewId(GL.programs);var program=GLctx.createProgram();program.name=id;program.maxUniformLength=program.maxAttributeLength=program.maxUniformBlockNameLength=0;program.uniformIdCounter=1;GL.programs[id]=program;return id}function _glCreateShader(shaderType){var id=GL.getNewId(GL.shaders);GL.shaders[id]=GLctx.createShader(shaderType);GL.shaders[id].shaderType=shaderType&1?"vs":"fs";return id}function _glCullFace(x0){GLctx["cullFace"](x0)}function _glDeleteBuffers(n,buffers){for(var i=0;i>2];var 
buffer=GL.buffers[id];if(!buffer)continue;GLctx.deleteBuffer(buffer);buffer.name=0;GL.buffers[id]=null;if(id==GLctx.currentArrayBufferBinding)GLctx.currentArrayBufferBinding=0;if(id==GLctx.currentElementArrayBufferBinding)GLctx.currentElementArrayBufferBinding=0;if(id==GLctx.currentPixelPackBufferBinding)GLctx.currentPixelPackBufferBinding=0;if(id==GLctx.currentPixelUnpackBufferBinding)GLctx.currentPixelUnpackBufferBinding=0}}function _glDeleteFramebuffers(n,framebuffers){for(var i=0;i>2];var framebuffer=GL.framebuffers[id];if(!framebuffer)continue;GLctx.deleteFramebuffer(framebuffer);framebuffer.name=0;GL.framebuffers[id]=null}}function _glDeleteProgram(id){if(!id)return;var program=GL.programs[id];if(!program){GL.recordError(1281);return}GLctx.deleteProgram(program);program.name=0;GL.programs[id]=null}function _glDeleteQueries(n,ids){for(var i=0;i>2];var query=GL.queries[id];if(!query)continue;GLctx["deleteQuery"](query);GL.queries[id]=null}}function _glDeleteRenderbuffers(n,renderbuffers){for(var i=0;i>2];var renderbuffer=GL.renderbuffers[id];if(!renderbuffer)continue;GLctx.deleteRenderbuffer(renderbuffer);renderbuffer.name=0;GL.renderbuffers[id]=null}}function _glDeleteSamplers(n,samplers){for(var i=0;i>2];var sampler=GL.samplers[id];if(!sampler)continue;GLctx["deleteSampler"](sampler);sampler.name=0;GL.samplers[id]=null}}function _glDeleteShader(id){if(!id)return;var shader=GL.shaders[id];if(!shader){GL.recordError(1281);return}GLctx.deleteShader(shader);GL.shaders[id]=null}function _glDeleteSync(id){if(!id)return;var sync=GL.syncs[id];if(!sync){GL.recordError(1281);return}GLctx.deleteSync(sync);sync.name=0;GL.syncs[id]=null}function _glDeleteTextures(n,textures){for(var i=0;i>2];var texture=GL.textures[id];if(!texture)continue;GLctx.deleteTexture(texture);texture.name=0;GL.textures[id]=null}}function _glDeleteVertexArrays(n,vaos){for(var i=0;i>2];GLctx["deleteVertexArray"](GL.vaos[id]);GL.vaos[id]=null}}function 
_glDepthFunc(x0){GLctx["depthFunc"](x0)}function _glDepthMask(flag){GLctx.depthMask(!!flag)}function _glDetachShader(program,shader){GLctx.detachShader(GL.programs[program],GL.shaders[shader])}function _glDisable(x0){GLctx["disable"](x0)}function _glDisableVertexAttribArray(index){var cb=GL.currentContext.clientBuffers[index];cb.enabled=false;GLctx.disableVertexAttribArray(index)}function _glDrawArrays(mode,first,count){GL.preDrawHandleClientVertexAttribBindings(first+count);GLctx.drawArrays(mode,first,count);GL.postDrawHandleClientVertexAttribBindings()}function _glDrawArraysInstanced(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}var tempFixedLengthArray=[];function _glDrawBuffers(n,bufs){var bufArray=tempFixedLengthArray[n];for(var i=0;i>2]}GLctx["drawBuffers"](bufArray)}function _glDrawElements(mode,count,type,indices){var buf;if(!GLctx.currentElementArrayBufferBinding){var size=GL.calcBufLength(1,type,0,count);buf=GL.getTempIndexBuffer(size);GLctx.bindBuffer(34963,buf);GLctx.bufferSubData(34963,0,HEAPU8.subarray(indices,indices+size));indices=0}GL.preDrawHandleClientVertexAttribBindings(count);GLctx.drawElements(mode,count,type,indices);GL.postDrawHandleClientVertexAttribBindings(count);if(!GLctx.currentElementArrayBufferBinding){GLctx.bindBuffer(34963,null)}}function _glDrawElementsInstanced(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}function _glEnable(x0){GLctx["enable"](x0)}function _glEnableVertexAttribArray(index){var cb=GL.currentContext.clientBuffers[index];cb.enabled=true;GLctx.enableVertexAttribArray(index)}function _glEndQuery(x0){GLctx["endQuery"](x0)}function _glFenceSync(condition,flags){var sync=GLctx.fenceSync(condition,flags);if(sync){var id=GL.getNewId(GL.syncs);sync.name=id;GL.syncs[id]=sync;return id}else{return 0}}function _glFinish(){GLctx["finish"]()}function _glFlush(){GLctx["flush"]()}function 
emscriptenWebGLGetBufferBinding(target){switch(target){case 34962:target=34964;break;case 34963:target=34965;break;case 35051:target=35053;break;case 35052:target=35055;break;case 35982:target=35983;break;case 36662:target=36662;break;case 36663:target=36663;break;case 35345:target=35368;break}var buffer=GLctx.getParameter(target);if(buffer)return buffer.name|0;else return 0}function emscriptenWebGLValidateMapBufferTarget(target){switch(target){case 34962:case 34963:case 36662:case 36663:case 35051:case 35052:case 35882:case 35982:case 35345:return true;default:return false}}function _glFlushMappedBufferRange(target,offset,length){if(!emscriptenWebGLValidateMapBufferTarget(target)){GL.recordError(1280);err("GL_INVALID_ENUM in glFlushMappedBufferRange");return}var mapping=GL.mappedBuffers[emscriptenWebGLGetBufferBinding(target)];if(!mapping){GL.recordError(1282);err("buffer was never mapped in glFlushMappedBufferRange");return}if(!(mapping.access&16)){GL.recordError(1282);err("buffer was not mapped with GL_MAP_FLUSH_EXPLICIT_BIT in glFlushMappedBufferRange");return}if(offset<0||length<0||offset+length>mapping.length){GL.recordError(1281);err("invalid range in glFlushMappedBufferRange");return}GLctx.bufferSubData(target,mapping.offset,HEAPU8.subarray(mapping.mem+offset,mapping.mem+offset+length))}function _glFramebufferRenderbuffer(target,attachment,renderbuffertarget,renderbuffer){GLctx.framebufferRenderbuffer(target,attachment,renderbuffertarget,GL.renderbuffers[renderbuffer])}function _glFramebufferTexture2D(target,attachment,textarget,texture,level){GLctx.framebufferTexture2D(target,attachment,textarget,GL.textures[texture],level)}function _glFramebufferTextureLayer(target,attachment,texture,level,layer){GLctx.framebufferTextureLayer(target,attachment,GL.textures[texture],level,layer)}function _glFrontFace(x0){GLctx["frontFace"](x0)}function __glGenObject(n,buffers,createFunction,objectTable){for(var i=0;i>2]=id}}function 
_glGenBuffers(n,buffers){__glGenObject(n,buffers,"createBuffer",GL.buffers)}function _glGenFramebuffers(n,ids){__glGenObject(n,ids,"createFramebuffer",GL.framebuffers)}function _glGenQueries(n,ids){__glGenObject(n,ids,"createQuery",GL.queries)}function _glGenRenderbuffers(n,renderbuffers){__glGenObject(n,renderbuffers,"createRenderbuffer",GL.renderbuffers)}function _glGenSamplers(n,samplers){__glGenObject(n,samplers,"createSampler",GL.samplers)}function _glGenTextures(n,textures){__glGenObject(n,textures,"createTexture",GL.textures)}function _glGenVertexArrays(n,arrays){__glGenObject(n,arrays,"createVertexArray",GL.vaos)}function _glGenerateMipmap(x0){GLctx["generateMipmap"](x0)}function __glGetActiveAttribOrUniform(funcName,program,index,bufSize,length,size,type,name){program=GL.programs[program];var info=GLctx[funcName](program,index);if(info){var numBytesWrittenExclNull=name&&stringToUTF8(info.name,name,bufSize);if(length)HEAP32[length>>2]=numBytesWrittenExclNull;if(size)HEAP32[size>>2]=info.size;if(type)HEAP32[type>>2]=info.type}}function _glGetActiveAttrib(program,index,bufSize,length,size,type,name){__glGetActiveAttribOrUniform("getActiveAttrib",program,index,bufSize,length,size,type,name)}function _glGetActiveUniform(program,index,bufSize,length,size,type,name){__glGetActiveAttribOrUniform("getActiveUniform",program,index,bufSize,length,size,type,name)}function _glGetActiveUniformBlockName(program,uniformBlockIndex,bufSize,length,uniformBlockName){program=GL.programs[program];var result=GLctx["getActiveUniformBlockName"](program,uniformBlockIndex);if(!result)return;if(uniformBlockName&&bufSize>0){var numBytesWrittenExclNull=stringToUTF8(result,uniformBlockName,bufSize);if(length)HEAP32[length>>2]=numBytesWrittenExclNull}else{if(length)HEAP32[length>>2]=0}}function _glGetActiveUniformBlockiv(program,uniformBlockIndex,pname,params){if(!params){GL.recordError(1281);return}program=GL.programs[program];if(pname==35393){var 
name=GLctx["getActiveUniformBlockName"](program,uniformBlockIndex);HEAP32[params>>2]=name.length+1;return}var result=GLctx["getActiveUniformBlockParameter"](program,uniformBlockIndex,pname);if(result===null)return;if(pname==35395){for(var i=0;i>2]=result[i]}}else{HEAP32[params>>2]=result}}function _glGetActiveUniformsiv(program,uniformCount,uniformIndices,pname,params){if(!params){GL.recordError(1281);return}if(uniformCount>0&&uniformIndices==0){GL.recordError(1281);return}program=GL.programs[program];var ids=[];for(var i=0;i>2])}var result=GLctx["getActiveUniforms"](program,ids,pname);if(!result)return;var len=result.length;for(var i=0;i>2]=result[i]}}function _glGetAttribLocation(program,name){return GLctx.getAttribLocation(GL.programs[program],UTF8ToString(name))}function _glGetBufferSubData(target,offset,size,data){if(!data){GL.recordError(1281);return}GLctx["getBufferSubData"](target,offset,HEAPU8,data,size)}function _glGetError(){var error=GLctx.getError()||GL.lastError;GL.lastError=0;return error}function _glGetFramebufferAttachmentParameteriv(target,attachment,pname,params){var result=GLctx.getFramebufferAttachmentParameter(target,attachment,pname);if(result instanceof WebGLRenderbuffer||result instanceof WebGLTexture){result=result.name|0}HEAP32[params>>2]=result}function writeI53ToI64(ptr,num){HEAPU32[ptr>>2]=num;HEAPU32[ptr+4>>2]=(num-HEAPU32[ptr>>2])/4294967296}function emscriptenWebGLGetIndexed(target,index,data,type){if(!data){GL.recordError(1281);return}var result=GLctx["getIndexedParameter"](target,index);var ret;switch(typeof result){case"boolean":ret=result?1:0;break;case"number":ret=result;break;case"object":if(result===null){switch(target){case 35983:case 35368:ret=0;break;default:{GL.recordError(1280);return}}}else if(result instanceof WebGLBuffer){ret=result.name|0}else{GL.recordError(1280);return}break;default:GL.recordError(1280);return}switch(type){case 1:writeI53ToI64(data,ret);break;case 0:HEAP32[data>>2]=ret;break;case 
2:HEAPF32[data>>2]=ret;break;case 4:HEAP8[data>>0]=ret?1:0;break;default:throw"internal emscriptenWebGLGetIndexed() error, bad type: "+type}}function _glGetIntegeri_v(target,index,data){emscriptenWebGLGetIndexed(target,index,data,0)}function emscriptenWebGLGet(name_,p,type){if(!p){GL.recordError(1281);return}var ret=undefined;switch(name_){case 36346:ret=1;break;case 36344:if(type!=0&&type!=1){GL.recordError(1280)}return;case 34814:case 36345:ret=0;break;case 34466:var formats=GLctx.getParameter(34467);ret=formats?formats.length:0;break;case 33390:ret=1048576;break;case 33309:if(GL.currentContext.version<2){GL.recordError(1282);return}var exts=GLctx.getSupportedExtensions()||[];ret=2*exts.length;break;case 33307:case 33308:if(GL.currentContext.version<2){GL.recordError(1280);return}ret=name_==33307?3:0;break}if(ret===undefined){var result=GLctx.getParameter(name_);switch(typeof result){case"number":ret=result;break;case"boolean":ret=result?1:0;break;case"string":GL.recordError(1280);return;case"object":if(result===null){switch(name_){case 34964:case 35725:case 34965:case 36006:case 36007:case 32873:case 34229:case 36662:case 36663:case 35053:case 35055:case 36010:case 35097:case 35869:case 32874:case 36389:case 35983:case 35368:case 34068:{ret=0;break}default:{GL.recordError(1280);return}}}else if(result instanceof Float32Array||result instanceof Uint32Array||result instanceof Int32Array||result instanceof Array){for(var i=0;i>2]=result[i];break;case 2:HEAPF32[p+i*4>>2]=result[i];break;case 4:HEAP8[p+i>>0]=result[i]?1:0;break}}return}else{try{ret=result.name|0}catch(e){GL.recordError(1280);err("GL_INVALID_ENUM in glGet"+type+"v: Unknown object returned from WebGL getParameter("+name_+")! 
(error: "+e+")");return}}break;default:GL.recordError(1280);err("GL_INVALID_ENUM in glGet"+type+"v: Native code calling glGet"+type+"v("+name_+") and it returns "+result+" of type "+typeof result+"!");return}}switch(type){case 1:writeI53ToI64(p,ret);break;case 0:HEAP32[p>>2]=ret;break;case 2:HEAPF32[p>>2]=ret;break;case 4:HEAP8[p>>0]=ret?1:0;break}}function _glGetIntegerv(name_,p){emscriptenWebGLGet(name_,p,0)}function _glGetInternalformativ(target,internalformat,pname,bufSize,params){if(bufSize<0){GL.recordError(1281);return}if(!params){GL.recordError(1281);return}var ret=GLctx["getInternalformatParameter"](target,internalformat,pname);if(ret===null)return;for(var i=0;i>2]=ret[i]}}function _glGetProgramBinary(program,bufSize,length,binaryFormat,binary){GL.recordError(1282)}function _glGetProgramInfoLog(program,maxLength,length,infoLog){var log=GLctx.getProgramInfoLog(GL.programs[program]);if(log===null)log="(unknown error)";var numBytesWrittenExclNull=maxLength>0&&infoLog?stringToUTF8(log,infoLog,maxLength):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function _glGetProgramiv(program,pname,p){if(!p){GL.recordError(1281);return}if(program>=GL.counter){GL.recordError(1281);return}program=GL.programs[program];if(pname==35716){var log=GLctx.getProgramInfoLog(program);if(log===null)log="(unknown error)";HEAP32[p>>2]=log.length+1}else if(pname==35719){if(!program.maxUniformLength){for(var i=0;i>2]=program.maxUniformLength}else if(pname==35722){if(!program.maxAttributeLength){for(var i=0;i>2]=program.maxAttributeLength}else if(pname==35381){if(!program.maxUniformBlockNameLength){for(var i=0;i>2]=program.maxUniformBlockNameLength}else{HEAP32[p>>2]=GLctx.getProgramParameter(program,pname)}}function _glGetQueryObjectuiv(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param=GLctx["getQueryParameter"](query,pname);var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}HEAP32[params>>2]=ret}function 
_glGetQueryiv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx["getQuery"](target,pname)}function _glGetRenderbufferParameteriv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.getRenderbufferParameter(target,pname)}function _glGetShaderInfoLog(shader,maxLength,length,infoLog){var log=GLctx.getShaderInfoLog(GL.shaders[shader]);if(log===null)log="(unknown error)";var numBytesWrittenExclNull=maxLength>0&&infoLog?stringToUTF8(log,infoLog,maxLength):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function _glGetShaderPrecisionFormat(shaderType,precisionType,range,precision){var result=GLctx.getShaderPrecisionFormat(shaderType,precisionType);HEAP32[range>>2]=result.rangeMin;HEAP32[range+4>>2]=result.rangeMax;HEAP32[precision>>2]=result.precision}function _glGetShaderSource(shader,bufSize,length,source){var result=GLctx.getShaderSource(GL.shaders[shader]);if(!result)return;var numBytesWrittenExclNull=bufSize>0&&source?stringToUTF8(result,source,bufSize):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function _glGetShaderiv(shader,pname,p){if(!p){GL.recordError(1281);return}if(pname==35716){var log=GLctx.getShaderInfoLog(GL.shaders[shader]);if(log===null)log="(unknown error)";var logLength=log?log.length+1:0;HEAP32[p>>2]=logLength}else if(pname==35720){var source=GLctx.getShaderSource(GL.shaders[shader]);var sourceLength=source?source.length+1:0;HEAP32[p>>2]=sourceLength}else{HEAP32[p>>2]=GLctx.getShaderParameter(GL.shaders[shader],pname)}}function _glGetString(name_){var ret=GL.stringCache[name_];if(!ret){switch(name_){case 7939:var exts=GLctx.getSupportedExtensions()||[];exts=exts.concat(exts.map(function(e){return"GL_"+e}));ret=stringToNewUTF8(exts.join(" "));break;case 7936:case 7937:case 37445:case 37446:var s=GLctx.getParameter(name_);if(!s){GL.recordError(1280)}ret=s&&stringToNewUTF8(s);break;case 7938:var 
glVersion=GLctx.getParameter(7938);if(GL.currentContext.version>=2)glVersion="OpenGL ES 3.0 ("+glVersion+")";else{glVersion="OpenGL ES 2.0 ("+glVersion+")"}ret=stringToNewUTF8(glVersion);break;case 35724:var glslVersion=GLctx.getParameter(35724);var ver_re=/^WebGL GLSL ES ([0-9]\.[0-9][0-9]?)(?:$| .*)/;var ver_num=glslVersion.match(ver_re);if(ver_num!==null){if(ver_num[1].length==3)ver_num[1]=ver_num[1]+"0";glslVersion="OpenGL ES GLSL ES "+ver_num[1]+" ("+glslVersion+")"}ret=stringToNewUTF8(glslVersion);break;default:GL.recordError(1280)}GL.stringCache[name_]=ret}return ret}function _glGetStringi(name,index){if(GL.currentContext.version<2){GL.recordError(1282);return 0}var stringiCache=GL.stringiCache[name];if(stringiCache){if(index<0||index>=stringiCache.length){GL.recordError(1281);return 0}return stringiCache[index]}switch(name){case 7939:var exts=GLctx.getSupportedExtensions()||[];exts=exts.concat(exts.map(function(e){return"GL_"+e}));exts=exts.map(function(e){return stringToNewUTF8(e)});stringiCache=GL.stringiCache[name]=exts;if(index<0||index>=stringiCache.length){GL.recordError(1281);return 0}return stringiCache[index];default:GL.recordError(1280);return 0}}function _glGetTexParameteriv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.getTexParameter(target,pname)}function _glGetUniformBlockIndex(program,uniformBlockName){return GLctx["getUniformBlockIndex"](GL.programs[program],UTF8ToString(uniformBlockName))}function _glGetUniformIndices(program,uniformCount,uniformNames,uniformIndices){if(!uniformIndices){GL.recordError(1281);return}if(uniformCount>0&&(uniformNames==0||uniformIndices==0)){GL.recordError(1281);return}program=GL.programs[program];var names=[];for(var i=0;i<uniformCount;i++)names.push(UTF8ToString(HEAP32[uniformNames+i*4>>2]));var result=GLctx["getUniformIndices"](program,names);if(!result)return;var len=result.length;for(var i=0;i<len;i++){HEAP32[uniformIndices+i*4>>2]=result[i]}}function webglGetLeftBracePos(name){return name.slice(-1)=="]"&&name.lastIndexOf("[")}function 
webglPrepareUniformLocationsBeforeFirstUse(program){var uniformLocsById=program.uniformLocsById,uniformSizeAndIdsByName=program.uniformSizeAndIdsByName,i,j;if(!uniformLocsById){program.uniformLocsById=uniformLocsById={};program.uniformArrayNamesById={};for(i=0;i<GLctx.getProgramParameter(program,35718);++i){var u=GLctx.getActiveUniform(program,i);var nm=u.name;var sz=u.size;var lb=webglGetLeftBracePos(nm);var arrayName=lb>0?nm.slice(0,lb):nm;var id=uniformSizeAndIdsByName[arrayName]?uniformSizeAndIdsByName[arrayName][1]:program.uniformIdCounter;program.uniformIdCounter=Math.max(id+sz,program.uniformIdCounter);uniformSizeAndIdsByName[arrayName]=[sz,id];for(j=0;j<sz;++j){uniformLocsById[id]=j;program.uniformArrayNamesById[id++]=arrayName}}}}function _glGetUniformLocation(program,name){name=UTF8ToString(name);if(program=GL.programs[program]){webglPrepareUniformLocationsBeforeFirstUse(program);var arrayIndex=0;var uniformBaseName=name;var leftBrace=webglGetLeftBracePos(name);if(leftBrace>0){arrayIndex=jstoi_q(name.slice(leftBrace+1))>>>0;uniformBaseName=name.slice(0,leftBrace)}var sizeAndId=program.uniformSizeAndIdsByName[uniformBaseName];if(sizeAndId&&arrayIndex<sizeAndId[0]){return sizeAndId[1]+arrayIndex}else{return-1}}else{GL.recordError(1281);return-1}}function webglGetUniformLocation(location){var p=GLctx.currentProgram;if(p){var webglLoc=p.uniformLocsById[location];if(typeof webglLoc=="number"){p.uniformLocsById[location]=webglLoc=GLctx.getUniformLocation(p,p.uniformArrayNamesById[location]+(webglLoc>0?"["+webglLoc+"]":""))}return webglLoc}else{GL.recordError(1282)}}function emscriptenWebGLGetUniform(program,location,params,type){if(!params){GL.recordError(1281);return}program=GL.programs[program];webglPrepareUniformLocationsBeforeFirstUse(program);var data=GLctx.getUniform(program,webglGetUniformLocation(location));if(typeof data=="number"||typeof data=="boolean"){switch(type){case 0:HEAP32[params>>2]=data;break;case 2:HEAPF32[params>>2]=data;break}}else{for(var i=0;i<data.length;i++){switch(type){case 0:HEAP32[params+i*4>>2]=data[i];break;case 2:HEAPF32[params+i*4>>2]=data[i];break}}}}function _glGetUniformiv(program,location,params){emscriptenWebGLGetUniform(program,location,params,0)}function emscriptenWebGLGetVertexAttrib(index,pname,params,type){if(!params){GL.recordError(1281);return}if(GL.currentContext.clientBuffers[index].enabled){err("glGetVertexAttrib*v on client-side array: not supported, bad data returned")}var data=GLctx.getVertexAttrib(index,pname);if(pname==34975){HEAP32[params>>2]=data&&data["name"]}else if(typeof data=="number"||typeof data=="boolean"){switch(type){case 0:HEAP32[params>>2]=data;break;case 2:HEAPF32[params>>2]=data;break;case 5:HEAP32[params>>2]=Math.fround(data);break}}else{for(var i=0;i<data.length;i++){switch(type){case 0:HEAP32[params+i*4>>2]=data[i];break;case 2:HEAPF32[params+i*4>>2]=data[i];break;case 
5:HEAP32[params+i*4>>2]=Math.fround(data[i]);break}}}}function _glGetVertexAttribiv(index,pname,params){emscriptenWebGLGetVertexAttrib(index,pname,params,5)}function _glInvalidateFramebuffer(target,numAttachments,attachments){var list=tempFixedLengthArray[numAttachments];for(var i=0;i<numAttachments;i++){list[i]=HEAP32[attachments+i*4>>2]}GLctx["invalidateFramebuffer"](target,list)}function _glIsEnabled(x0){return GLctx["isEnabled"](x0)}function _glIsVertexArray(array){var vao=GL.vaos[array];if(!vao)return 0;return GLctx["isVertexArray"](vao)}function _glLinkProgram(program){program=GL.programs[program];GLctx.linkProgram(program);program.uniformLocsById=0;program.uniformSizeAndIdsByName={};[program["vs"],program["fs"]].forEach(function(s){Object.keys(s.explicitUniformLocations).forEach(function(shaderLocation){var loc=s.explicitUniformLocations[shaderLocation];program.uniformSizeAndIdsByName[shaderLocation]=[1,loc];program.uniformIdCounter=Math.max(program.uniformIdCounter,loc+1)})});function copyKeys(dst,src){Object.keys(src).forEach(function(key){dst[key]=src[key]})}program.explicitUniformBindings={};program.explicitSamplerBindings={};[program["vs"],program["fs"]].forEach(function(s){copyKeys(program.explicitUniformBindings,s.explicitUniformBindings);copyKeys(program.explicitSamplerBindings,s.explicitSamplerBindings)});program.explicitProgramBindingsApplied=0}function _glMapBufferRange(target,offset,length,access){if(access!=26&&access!=10){err("glMapBufferRange is only supported when access is MAP_WRITE|INVALIDATE_BUFFER");return 0}if(!emscriptenWebGLValidateMapBufferTarget(target)){GL.recordError(1280);err("GL_INVALID_ENUM in glMapBufferRange");return 0}var mem=_malloc(length);if(!mem)return 0;GL.mappedBuffers[emscriptenWebGLGetBufferBinding(target)]={offset:offset,length:length,mem:mem,access:access};return mem}function _glPixelStorei(pname,param){if(pname==3317){GL.unpackAlignment=param}GLctx.pixelStorei(pname,param)}function _glPolygonOffset(x0,x1){GLctx["polygonOffset"](x0,x1)}function 
_glProgramBinary(program,binaryFormat,binary,length){GL.recordError(1280)}function _glProgramParameteri(program,pname,value){GL.recordError(1280)}function _glReadBuffer(x0){GLctx["readBuffer"](x0)}function computeUnpackAlignedImageSize(width,height,sizePerPixel,alignment){function roundedToNextMultipleOf(x,y){return x+y-1&-y}var plainRowSize=width*sizePerPixel;var alignedRowSize=roundedToNextMultipleOf(plainRowSize,alignment);return height*alignedRowSize}function __colorChannelsInGlTextureFormat(format){var colorChannels={5:3,6:4,8:2,29502:3,29504:4,26917:2,26918:2,29846:3,29847:4};return colorChannels[format-6402]||1}function heapObjectForWebGLType(type){type-=5120;if(type==0)return HEAP8;if(type==1)return HEAPU8;if(type==2)return HEAP16;if(type==4)return HEAP32;if(type==6)return HEAPF32;if(type==5||type==28922||type==28520||type==30779||type==30782)return HEAPU32;return HEAPU16}function heapAccessShiftForWebGLHeap(heap){return 31-Math.clz32(heap.BYTES_PER_ELEMENT)}function emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,internalFormat){var heap=heapObjectForWebGLType(type);var shift=heapAccessShiftForWebGLHeap(heap);var byteSize=1<<shift;var sizePerPixel=__colorChannelsInGlTextureFormat(format)*byteSize;var bytes=computeUnpackAlignedImageSize(width,height,sizePerPixel,GL.unpackAlignment);return heap.subarray(pixels>>shift,pixels+bytes>>shift)}function _glReadPixels(x,y,width,height,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelPackBufferBinding){GLctx.readPixels(x,y,width,height,format,type,pixels)}else{var heap=heapObjectForWebGLType(type);GLctx.readPixels(x,y,width,height,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}return}var pixelData=emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,format);if(!pixelData){GL.recordError(1280);return}GLctx.readPixels(x,y,width,height,format,type,pixelData)}function _glRenderbufferStorage(x0,x1,x2,x3){GLctx["renderbufferStorage"](x0,x1,x2,x3)}function _glRenderbufferStorageMultisample(x0,x1,x2,x3,x4){GLctx["renderbufferStorageMultisample"](x0,x1,x2,x3,x4)}function 
_glSamplerParameteri(sampler,pname,param){GLctx["samplerParameteri"](GL.samplers[sampler],pname,param)}function _glScissor(x0,x1,x2,x3){GLctx["scissor"](x0,x1,x2,x3)}function find_closing_parens_index(arr,i,opening="(",closing=")"){for(var nesting=0;i{return defs[args[0]]?1:0});function isWhitespace(str,i){return!(str.charCodeAt(i)>32)}function nextWhitespace(str,i){while(!isWhitespace(str,i))++i;return i}function classifyChar(str,idx){var cc=str.charCodeAt(idx);if(cc>32){if(cc<48)return 1;if(cc<58)return 2;if(cc<65)return 1;if(cc<91||cc==95)return 3;if(cc<97)return 1;if(cc<123)return 3;return 1}return cc<33?0:4}function tokenize(exprString,keepWhitespace){var out=[],len=exprString.length;for(var i=0;i<=len;++i){var kind=classifyChar(exprString,i);if(kind==2||kind==3){for(var j=i+1;j<=len;++j){var kind2=classifyChar(exprString,j);if(kind2!=kind&&(kind2!=2||kind!=3)){out.push(exprString.substring(i,j));i=j-1;break}}}else if(kind==1){var op2=exprString.substr(i,2);if(["<=",">=","==","!=","&&","||"].includes(op2)){out.push(op2);++i}else{out.push(exprString[i])}}}return out}function expandMacros(str,lineStart,lineEnd){if(lineEnd===undefined)lineEnd=str.length;var len=str.length;var out="";for(var i=lineStart;i1||typeof tokens[0]!="function"){tokens=function(tokens){var i,j,p,operatorAndPriority=-2;for(j=0;j",">=","==","!=","&&","||","("].indexOf(tokens[j]))>operatorAndPriority){i=j;operatorAndPriority=p}}if(operatorAndPriority==13){var j=find_closing_parens_index(tokens,i);if(j){tokens.splice(i,j+1-i,buildExprTree(tokens.slice(i+1,j)));return tokens}}if(operatorAndPriority==4){i=tokens.lastIndexOf("!");var innerExpr=buildExprTree(tokens.slice(i+1,i+2));tokens.splice(i,2,function(){return!innerExpr()});return tokens}if(operatorAndPriority>=0){var left=buildExprTree(tokens.slice(0,i));var right=buildExprTree(tokens.slice(i+1));switch(tokens[i]){case"&&":return[function(){return left()&&right()}];case"||":return[function(){return 
left()||right()}];case"==":return[function(){return left()==right()}];case"!=":return[function(){return left()!=right()}];case"<":return[function(){return left()":return[function(){return left()>right()}];case">=":return[function(){return left()>=right()}];case"+":return[function(){return left()+right()}];case"-":return[function(){return left()-right()}];case"*":return[function(){return left()*right()}];case"/":return[function(){return Math.floor(left()/right())}]}}var num=jstoi_q(tokens[i]);return[function(){return num}]}(tokens)}return tokens[0]}for(;i0){var macroEnd=expression.indexOf(")",macroStart);let params=expression.substring(macroStart+1,macroEnd).split(",").map(x=>x.trim());let value=tokenize(expression.substring(macroEnd+1).trim());defs[expression.substring(0,macroStart)]=(args=>{var ret="";value.forEach(x=>{var argIndex=params.indexOf(x);ret+=argIndex>=0?args[argIndex]:x});return ret})}else{let value=expandMacros(expression.substring(firstWs+1).trim(),0);defs[expression.substring(0,firstWs)]=(()=>value)}}break;case"undef":if(thisLineIsInActivePreprocessingBlock)delete defs[expression];break;default:if(directive!="version"&&directive!="pragma"&&directive!="extension"){}out+=expandMacros(code,lineStart,i)+"\n"}}return out}function remove_cpp_comments_in_shaders(code){var i=0,out="",ch,next,len=code.length;for(;i1,"GL_ES":()=>1,"__VERSION__":()=>source.includes("#version 300")?300:100});var regex=/layout\s*\(\s*location\s*=\s*(-?\d+)\s*\)\s*(uniform\s+((lowp|mediump|highp)\s+)?\w+\s+(\w+))/g,explicitUniformLocations={},match;while(match=regex.exec(source)){explicitUniformLocations[match[5]]=jstoi_q(match[1]);if(!(explicitUniformLocations[match[5]]>=0&&explicitUniformLocations[match[5]]<1048576)){err('Specified an out of range layout(location=x) directive "'+explicitUniformLocations[match[5]]+'"! 
('+match[0]+")");GL.recordError(1281);return}}source=source.replace(regex,"$2");GL.shaders[shader].explicitUniformLocations=explicitUniformLocations;var bindingRegex=/layout\s*\(.*?binding\s*=\s*(-?\d+).*?\)\s*uniform\s+(\w+)\s+(\w+)?/g,samplerBindings={},uniformBindings={},bindingMatch;while(bindingMatch=bindingRegex.exec(source)){var arrayLength=1;for(var i=bindingMatch.index;i=0&&binding+arrayLength<=numBindingPoints)){err('Specified an out of range layout(binding=x) directive "'+binding+'"! ('+bindingMatch[0]+"). Valid range is [0, "+numBindingPoints+"-1]");GL.recordError(1281);return}}source=source.replace(/layout\s*\(.*?binding\s*=\s*([-\d]+).*?\)/g,"");source=source.replace(/(layout\s*\((.*?)),\s*binding\s*=\s*([-\d]+)\)/g,"$1)");source=source.replace(/layout\s*\(\s*binding\s*=\s*([-\d]+)\s*,(.*?)\)/g,"layout($2)");GL.shaders[shader].explicitSamplerBindings=samplerBindings;GL.shaders[shader].explicitUniformBindings=uniformBindings;GLctx.shaderSource(GL.shaders[shader],source)}function _glStencilFuncSeparate(x0,x1,x2,x3){GLctx["stencilFuncSeparate"](x0,x1,x2,x3)}function _glStencilMask(x0){GLctx["stencilMask"](x0)}function _glStencilOpSeparate(x0,x1,x2,x3){GLctx["stencilOpSeparate"](x0,x1,x2,x3)}function _glTexImage2D(target,level,internalFormat,width,height,border,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,null)}return}GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,pixels?emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,internalFormat):null)}function 
_glTexImage3D(target,level,internalFormat,width,height,depth,border,format,type,pixels){if(GLctx.currentPixelUnpackBufferBinding){GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,null)}}function _glTexParameterf(x0,x1,x2){GLctx["texParameterf"](x0,x1,x2)}function _glTexParameteri(x0,x1,x2){GLctx["texParameteri"](x0,x1,x2)}function _glTexParameteriv(target,pname,params){var param=HEAP32[params>>2];GLctx.texParameteri(target,pname,param)}function _glTexStorage2D(x0,x1,x2,x3,x4){GLctx["texStorage2D"](x0,x1,x2,x3,x4)}function _glTexStorage3D(x0,x1,x2,x3,x4,x5){GLctx["texStorage3D"](x0,x1,x2,x3,x4,x5)}function _glTexSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,null)}return}var pixelData=null;if(pixels)pixelData=emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,0);GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixelData)}function _glTexSubImage3D(target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,pixels){if(GLctx.currentPixelUnpackBufferBinding){GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,pixels)}else if(pixels){var 
heap=heapObjectForWebGLType(type);GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,null)}}var miniTempWebGLFloatBuffers=[];function _glUniform1fv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform1fv(webglGetUniformLocation(location),HEAPF32,value>>2,count);return}if(count<=288){var view=miniTempWebGLFloatBuffers[count-1];for(var i=0;i<count;++i){view[i]=HEAPF32[value+4*i>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*4>>2)}GLctx.uniform1fv(webglGetUniformLocation(location),view)}function _glUniform1i(location,v0){GLctx.uniform1i(webglGetUniformLocation(location),v0)}var __miniTempWebGLIntBuffers=[];function _glUniform1iv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform1iv(webglGetUniformLocation(location),HEAP32,value>>2,count);return}if(count<=288){var view=__miniTempWebGLIntBuffers[count-1];for(var i=0;i<count;++i){view[i]=HEAP32[value+4*i>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*4>>2)}GLctx.uniform1iv(webglGetUniformLocation(location),view)}function _glUniform1uiv(location,count,value){GLctx.uniform1uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count)}function _glUniform2fv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform2fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*2);return}if(count<=144){var view=miniTempWebGLFloatBuffers[2*count-1];for(var i=0;i<2*count;i+=2){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*8>>2)}GLctx.uniform2fv(webglGetUniformLocation(location),view)}function _glUniform2iv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform2iv(webglGetUniformLocation(location),HEAP32,value>>2,count*2);return}if(count<=144){var view=__miniTempWebGLIntBuffers[2*count-1];for(var 
i=0;i<2*count;i+=2){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*8>>2)}GLctx.uniform2iv(webglGetUniformLocation(location),view)}function _glUniform2uiv(location,count,value){GLctx.uniform2uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count*2)}function _glUniform3fv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform3fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*3);return}if(count<=96){var view=miniTempWebGLFloatBuffers[3*count-1];for(var i=0;i<3*count;i+=3){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*12>>2)}GLctx.uniform3fv(webglGetUniformLocation(location),view)}function _glUniform3iv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform3iv(webglGetUniformLocation(location),HEAP32,value>>2,count*3);return}if(count<=96){var view=__miniTempWebGLIntBuffers[3*count-1];for(var i=0;i<3*count;i+=3){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2];view[i+2]=HEAP32[value+(4*i+8)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*12>>2)}GLctx.uniform3iv(webglGetUniformLocation(location),view)}function _glUniform3uiv(location,count,value){GLctx.uniform3uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count*3)}function _glUniform4fv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform4fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*4);return}if(count<=72){var view=miniTempWebGLFloatBuffers[4*count-1];var heap=HEAPF32;value>>=2;for(var i=0;i<4*count;i+=4){var dst=value+i;view[i]=heap[dst];view[i+1]=heap[dst+1];view[i+2]=heap[dst+2];view[i+3]=heap[dst+3]}}else{var view=HEAPF32.subarray(value>>2,value+count*16>>2)}GLctx.uniform4fv(webglGetUniformLocation(location),view)}function 
_glUniform4iv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform4iv(webglGetUniformLocation(location),HEAP32,value>>2,count*4);return}if(count<=72){var view=__miniTempWebGLIntBuffers[4*count-1];for(var i=0;i<4*count;i+=4){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2];view[i+2]=HEAP32[value+(4*i+8)>>2];view[i+3]=HEAP32[value+(4*i+12)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*16>>2)}GLctx.uniform4iv(webglGetUniformLocation(location),view)}function _glUniform4uiv(location,count,value){GLctx.uniform4uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count*4)}function _glUniformBlockBinding(program,uniformBlockIndex,uniformBlockBinding){program=GL.programs[program];GLctx["uniformBlockBinding"](program,uniformBlockIndex,uniformBlockBinding)}function _glUniformMatrix3fv(location,count,transpose,value){if(GL.currentContext.version>=2){GLctx.uniformMatrix3fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*9);return}if(count<=32){var view=miniTempWebGLFloatBuffers[9*count-1];for(var i=0;i<9*count;i+=9){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2];view[i+3]=HEAPF32[value+(4*i+12)>>2];view[i+4]=HEAPF32[value+(4*i+16)>>2];view[i+5]=HEAPF32[value+(4*i+20)>>2];view[i+6]=HEAPF32[value+(4*i+24)>>2];view[i+7]=HEAPF32[value+(4*i+28)>>2];view[i+8]=HEAPF32[value+(4*i+32)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*36>>2)}GLctx.uniformMatrix3fv(webglGetUniformLocation(location),!!transpose,view)}function _glUniformMatrix4fv(location,count,transpose,value){if(GL.currentContext.version>=2){GLctx.uniformMatrix4fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*16);return}if(count<=18){var view=miniTempWebGLFloatBuffers[16*count-1];var heap=HEAPF32;value>>=2;for(var i=0;i<16*count;i+=16){var 
dst=value+i;view[i]=heap[dst];view[i+1]=heap[dst+1];view[i+2]=heap[dst+2];view[i+3]=heap[dst+3];view[i+4]=heap[dst+4];view[i+5]=heap[dst+5];view[i+6]=heap[dst+6];view[i+7]=heap[dst+7];view[i+8]=heap[dst+8];view[i+9]=heap[dst+9];view[i+10]=heap[dst+10];view[i+11]=heap[dst+11];view[i+12]=heap[dst+12];view[i+13]=heap[dst+13];view[i+14]=heap[dst+14];view[i+15]=heap[dst+15]}}else{var view=HEAPF32.subarray(value>>2,value+count*64>>2)}GLctx.uniformMatrix4fv(webglGetUniformLocation(location),!!transpose,view)}function _glUnmapBuffer(target){if(!emscriptenWebGLValidateMapBufferTarget(target)){GL.recordError(1280);err("GL_INVALID_ENUM in glUnmapBuffer");return 0}var buffer=emscriptenWebGLGetBufferBinding(target);var mapping=GL.mappedBuffers[buffer];if(!mapping){GL.recordError(1282);err("buffer was never mapped in glUnmapBuffer");return 0}GL.mappedBuffers[buffer]=null;if(!(mapping.access&16))if(GL.currentContext.version>=2){GLctx.bufferSubData(target,mapping.offset,HEAPU8,mapping.mem,mapping.length)}else{GLctx.bufferSubData(target,mapping.offset,HEAPU8.subarray(mapping.mem,mapping.mem+mapping.length))}_free(mapping.mem);return 1}function webglApplyExplicitProgramBindings(){var p=GLctx.currentProgram;if(!p.explicitProgramBindingsApplied){if(GL.currentContext.version>=2){Object.keys(p.explicitUniformBindings).forEach(function(ubo){var bindings=p.explicitUniformBindings[ubo];for(var i=0;i<bindings[1];++i){var blockIndex=GLctx.getUniformBlockIndex(p,ubo+(bindings[1]>1?"["+i+"]":""));GLctx.uniformBlockBinding(p,blockIndex,bindings[0]+i)}})}Object.keys(p.explicitSamplerBindings).forEach(function(sampler){var bindings=p.explicitSamplerBindings[sampler];for(var i=0;i>2],HEAPF32[v+4>>2],HEAPF32[v+8>>2],HEAPF32[v+12>>2])}function _glVertexAttribIPointer(index,size,type,stride,ptr){var 
cb=GL.currentContext.clientBuffers[index];if(!GLctx.currentArrayBufferBinding){cb.size=size;cb.type=type;cb.normalized=false;cb.stride=stride;cb.ptr=ptr;cb.clientside=true;cb.vertexAttribPointerAdaptor=function(index,size,type,normalized,stride,ptr){this.vertexAttribIPointer(index,size,type,stride,ptr)};return}cb.clientside=false;GLctx["vertexAttribIPointer"](index,size,type,stride,ptr)}function _glVertexAttribPointer(index,size,type,normalized,stride,ptr){var cb=GL.currentContext.clientBuffers[index];if(!GLctx.currentArrayBufferBinding){cb.size=size;cb.type=type;cb.normalized=normalized;cb.stride=stride;cb.ptr=ptr;cb.clientside=true;cb.vertexAttribPointerAdaptor=function(index,size,type,normalized,stride,ptr){this.vertexAttribPointer(index,size,type,normalized,stride,ptr)};return}cb.clientside=false;GLctx.vertexAttribPointer(index,size,type,!!normalized,stride,ptr)}function _glViewport(x0,x1,x2,x3){GLctx["viewport"](x0,x1,x2,x3)}function _llvm_eh_typeid_for(type){return type}function _setTempRet0(val){setTempRet0(val)}function __isLeapYear(year){return year%4===0&&(year%100!==0||year%400===0)}function __arraySum(array,index){var sum=0;for(var i=0;i<=index;sum+=array[i++]){}return sum}var __MONTH_DAYS_LEAP=[31,29,31,30,31,30,31,31,30,31,30,31];var __MONTH_DAYS_REGULAR=[31,28,31,30,31,30,31,31,30,31,30,31];function __addDays(date,days){var newDate=new Date(date.getTime());while(days>0){var leap=__isLeapYear(newDate.getFullYear());var currentMonth=newDate.getMonth();var daysInCurrentMonth=(leap?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR)[currentMonth];if(days>daysInCurrentMonth-newDate.getDate()){days-=daysInCurrentMonth-newDate.getDate()+1;newDate.setDate(1);if(currentMonth<11){newDate.setMonth(currentMonth+1)}else{newDate.setMonth(0);newDate.setFullYear(newDate.getFullYear()+1)}}else{newDate.setDate(newDate.getDate()+days);return newDate}}return newDate}function _strftime(s,maxsize,format,tm){var tm_zone=HEAP32[tm+40>>2];var 
date={tm_sec:HEAP32[tm>>2],tm_min:HEAP32[tm+4>>2],tm_hour:HEAP32[tm+8>>2],tm_mday:HEAP32[tm+12>>2],tm_mon:HEAP32[tm+16>>2],tm_year:HEAP32[tm+20>>2],tm_wday:HEAP32[tm+24>>2],tm_yday:HEAP32[tm+28>>2],tm_isdst:HEAP32[tm+32>>2],tm_gmtoff:HEAP32[tm+36>>2],tm_zone:tm_zone?UTF8ToString(tm_zone):""};var pattern=UTF8ToString(format);var EXPANSION_RULES_1={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"};for(var rule in EXPANSION_RULES_1){pattern=pattern.replace(new RegExp(rule,"g"),EXPANSION_RULES_1[rule])}var WEEKDAYS=["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"];var MONTHS=["January","February","March","April","May","June","July","August","September","October","November","December"];function leadingSomething(value,digits,character){var str=typeof value=="number"?value.toString():value||"";while(str.length<digits)str=character[0]+str;return str}function leadingNulls(value,digits){return leadingSomething(value,digits,"0")}function compareByDay(date1,date2){function sgn(value){return value<0?-1:value>0?1:0}var compare;if((compare=sgn(date1.getFullYear()-date2.getFullYear()))===0){if((compare=sgn(date1.getMonth()-date2.getMonth()))===0){compare=sgn(date1.getDate()-date2.getDate())}}return compare}function getFirstWeekStartDate(janFourth){switch(janFourth.getDay()){case 0:return new Date(janFourth.getFullYear()-1,11,29);case 1:return janFourth;case 2:return new Date(janFourth.getFullYear(),0,3);case 3:return new Date(janFourth.getFullYear(),0,2);case 4:return new Date(janFourth.getFullYear(),0,1);case 5:return new Date(janFourth.getFullYear()-1,11,31);case 6:return new Date(janFourth.getFullYear()-1,11,30)}}function getWeekBasedYear(date){var thisDate=__addDays(new Date(date.tm_year+1900,0,1),date.tm_yday);var janFourthThisYear=new Date(thisDate.getFullYear(),0,4);var janFourthNextYear=new 
Date(thisDate.getFullYear()+1,0,4);var firstWeekStartThisYear=getFirstWeekStartDate(janFourthThisYear);var firstWeekStartNextYear=getFirstWeekStartDate(janFourthNextYear);if(compareByDay(firstWeekStartThisYear,thisDate)<=0){if(compareByDay(firstWeekStartNextYear,thisDate)<=0){return thisDate.getFullYear()+1}else{return thisDate.getFullYear()}}else{return thisDate.getFullYear()-1}}var EXPANSION_RULES_2={"%a":function(date){return WEEKDAYS[date.tm_wday].substring(0,3)},"%A":function(date){return WEEKDAYS[date.tm_wday]},"%b":function(date){return MONTHS[date.tm_mon].substring(0,3)},"%B":function(date){return MONTHS[date.tm_mon]},"%C":function(date){var year=date.tm_year+1900;return leadingNulls(year/100|0,2)},"%d":function(date){return leadingNulls(date.tm_mday,2)},"%e":function(date){return leadingSomething(date.tm_mday,2," ")},"%g":function(date){return getWeekBasedYear(date).toString().substring(2)},"%G":function(date){return getWeekBasedYear(date)},"%H":function(date){return leadingNulls(date.tm_hour,2)},"%I":function(date){var twelveHour=date.tm_hour;if(twelveHour==0)twelveHour=12;else if(twelveHour>12)twelveHour-=12;return leadingNulls(twelveHour,2)},"%j":function(date){return leadingNulls(date.tm_mday+__arraySum(__isLeapYear(date.tm_year+1900)?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR,date.tm_mon-1),3)},"%m":function(date){return leadingNulls(date.tm_mon+1,2)},"%M":function(date){return leadingNulls(date.tm_min,2)},"%n":function(){return"\n"},"%p":function(date){if(date.tm_hour>=0&&date.tm_hour<12){return"AM"}else{return"PM"}},"%S":function(date){return leadingNulls(date.tm_sec,2)},"%t":function(){return"\t"},"%u":function(date){return date.tm_wday||7},"%U":function(date){var days=date.tm_yday+7-date.tm_wday;return leadingNulls(Math.floor(days/7),2)},"%V":function(date){var val=Math.floor((date.tm_yday+7-(date.tm_wday+6)%7)/7);if((date.tm_wday+371-date.tm_yday-2)%7<=2){val++}if(!val){val=52;var 
dec31=(date.tm_wday+7-date.tm_yday-1)%7;if(dec31==4||dec31==5&&__isLeapYear(date.tm_year%400-1)){val++}}else if(val==53){var jan1=(date.tm_wday+371-date.tm_yday)%7;if(jan1!=4&&(jan1!=3||!__isLeapYear(date.tm_year)))val=1}return leadingNulls(val,2)},"%w":function(date){return date.tm_wday},"%W":function(date){var days=date.tm_yday+7-(date.tm_wday+6)%7;return leadingNulls(Math.floor(days/7),2)},"%y":function(date){return(date.tm_year+1900).toString().substring(2)},"%Y":function(date){return date.tm_year+1900},"%z":function(date){var off=date.tm_gmtoff;var ahead=off>=0;off=Math.abs(off)/60;off=off/60*100+off%60;return(ahead?"+":"-")+String("0000"+off).slice(-4)},"%Z":function(date){return date.tm_zone},"%%":function(){return"%"}};pattern=pattern.replace(/%%/g,"\0\0");for(var rule in EXPANSION_RULES_2){if(pattern.includes(rule)){pattern=pattern.replace(new RegExp(rule,"g"),EXPANSION_RULES_2[rule](date))}}pattern=pattern.replace(/\0\0/g,"%");var bytes=intArrayFromString(pattern,false);if(bytes.length>maxsize){return 0}writeArrayToMemory(bytes,s);return bytes.length-1}var FSNode=function(parent,name,mode,rdev){if(!parent){parent=this}this.parent=parent;this.mount=parent.mount;this.mounted=null;this.id=FS.nextInode++;this.name=name;this.mode=mode;this.node_ops={};this.stream_ops={};this.rdev=rdev};var readMode=292|73;var writeMode=146;Object.defineProperties(FSNode.prototype,{read:{get:function(){return(this.mode&readMode)===readMode},set:function(val){val?this.mode|=readMode:this.mode&=~readMode}},write:{get:function(){return(this.mode&writeMode)===writeMode},set:function(val){val?this.mode|=writeMode:this.mode&=~writeMode}},isFolder:{get:function(){return FS.isDir(this.mode)}},isDevice:{get:function(){return FS.isChrdev(this.mode)}}});FS.FSNode=FSNode;FS.staticInit();Module["FS_createPath"]=FS.createPath;Module["FS_createDataFile"]=FS.createDataFile;Module["requestFullscreen"]=function 
Module_requestFullscreen(lockPointer,resizeCanvas){Browser.requestFullscreen(lockPointer,resizeCanvas)};Module["requestAnimationFrame"]=function Module_requestAnimationFrame(func){Browser.requestAnimationFrame(func)};Module["setCanvasSize"]=function Module_setCanvasSize(width,height,noUpdates){Browser.setCanvasSize(width,height,noUpdates)};Module["pauseMainLoop"]=function Module_pauseMainLoop(){Browser.mainLoop.pause()};Module["resumeMainLoop"]=function Module_resumeMainLoop(){Browser.mainLoop.resume()};Module["getUserMedia"]=function Module_getUserMedia(){Browser.getUserMedia()};Module["createContext"]=function Module_createContext(canvas,useWebGL,setInModule,webGLContextAttributes){return Browser.createContext(canvas,useWebGL,setInModule,webGLContextAttributes)};var GLctx;for(var i=0;i<32;++i)tempFixedLengthArray.push(new Array(i));var miniTempWebGLFloatBuffersStorage=new Float32Array(288);for(var i=0;i<288;++i){miniTempWebGLFloatBuffers[i]=miniTempWebGLFloatBuffersStorage.subarray(0,i+1)}var __miniTempWebGLIntBuffersStorage=new Int32Array(288);for(var i=0;i<288;++i){__miniTempWebGLIntBuffers[i]=__miniTempWebGLIntBuffersStorage.subarray(0,i+1)}var ASSERTIONS=false;function intArrayFromString(stringy,dontAddNull,length){var len=length>0?length:lengthBytesUTF8(stringy)+1;var u8array=new Array(len);var numBytesWritten=stringToUTF8Array(stringy,u8array,0,u8array.length);if(dontAddNull)u8array.length=numBytesWritten;return u8array}var 
asmLibraryArg={"GetJSMemoryInfo":_GetJSMemoryInfo,"JS_Accelerometer_IsRunning":_JS_Accelerometer_IsRunning,"JS_Accelerometer_Start":_JS_Accelerometer_Start,"JS_Accelerometer_Stop":_JS_Accelerometer_Stop,"JS_Cursor_SetImage":_JS_Cursor_SetImage,"JS_Cursor_SetShow":_JS_Cursor_SetShow,"JS_DOM_MapViewportCoordinateToElementLocalCoordinate":_JS_DOM_MapViewportCoordinateToElementLocalCoordinate,"JS_DOM_UnityCanvasSelector":_JS_DOM_UnityCanvasSelector,"JS_Eval_OpenURL":_JS_Eval_OpenURL,"JS_FileSystem_Initialize":_JS_FileSystem_Initialize,"JS_FileSystem_Sync":_JS_FileSystem_Sync,"JS_GravitySensor_IsRunning":_JS_GravitySensor_IsRunning,"JS_GravitySensor_Start":_JS_GravitySensor_Start,"JS_GravitySensor_Stop":_JS_GravitySensor_Stop,"JS_GuardAgainstJsExceptions":_JS_GuardAgainstJsExceptions,"JS_Gyroscope_IsRunning":_JS_Gyroscope_IsRunning,"JS_Gyroscope_Start":_JS_Gyroscope_Start,"JS_Gyroscope_Stop":_JS_Gyroscope_Stop,"JS_LinearAccelerationSensor_IsRunning":_JS_LinearAccelerationSensor_IsRunning,"JS_LinearAccelerationSensor_Start":_JS_LinearAccelerationSensor_Start,"JS_LinearAccelerationSensor_Stop":_JS_LinearAccelerationSensor_Stop,"JS_Log_Dump":_JS_Log_Dump,"JS_Log_StackTrace":_JS_Log_StackTrace,"JS_MobileKeybard_GetIgnoreBlurEvent":_JS_MobileKeybard_GetIgnoreBlurEvent,"JS_MobileKeyboard_GetKeyboardStatus":_JS_MobileKeyboard_GetKeyboardStatus,"JS_MobileKeyboard_GetText":_JS_MobileKeyboard_GetText,"JS_MobileKeyboard_GetTextSelection":_JS_MobileKeyboard_GetTextSelection,"JS_MobileKeyboard_Hide":_JS_MobileKeyboard_Hide,"JS_MobileKeyboard_SetCharacterLimit":_JS_MobileKeyboard_SetCharacterLimit,"JS_MobileKeyboard_SetText":_JS_MobileKeyboard_SetText,"JS_MobileKeyboard_SetTextSelection":_JS_MobileKeyboard_SetTextSelection,"JS_MobileKeyboard_Show":_JS_MobileKeyboard_Show,"JS_OrientationSensor_IsRunning":_JS_OrientationSensor_IsRunning,"JS_OrientationSensor_Start":_JS_OrientationSensor_Start,"JS_OrientationSensor_Stop":_JS_OrientationSensor_Stop,"JS_RequestDeviceSensorPermissionsOnTouc
h":_JS_RequestDeviceSensorPermissionsOnTouch,"JS_RunQuitCallbacks":_JS_RunQuitCallbacks,"JS_ScreenOrientation_DeInit":_JS_ScreenOrientation_DeInit,"JS_ScreenOrientation_Init":_JS_ScreenOrientation_Init,"JS_ScreenOrientation_Lock":_JS_ScreenOrientation_Lock,"JS_Sound_Create_Channel":_JS_Sound_Create_Channel,"JS_Sound_GetLength":_JS_Sound_GetLength,"JS_Sound_GetLoadState":_JS_Sound_GetLoadState,"JS_Sound_Init":_JS_Sound_Init,"JS_Sound_Load":_JS_Sound_Load,"JS_Sound_Load_PCM":_JS_Sound_Load_PCM,"JS_Sound_Play":_JS_Sound_Play,"JS_Sound_ReleaseInstance":_JS_Sound_ReleaseInstance,"JS_Sound_ResumeIfNeeded":_JS_Sound_ResumeIfNeeded,"JS_Sound_Set3D":_JS_Sound_Set3D,"JS_Sound_SetListenerOrientation":_JS_Sound_SetListenerOrientation,"JS_Sound_SetListenerPosition":_JS_Sound_SetListenerPosition,"JS_Sound_SetLoop":_JS_Sound_SetLoop,"JS_Sound_SetLoopPoints":_JS_Sound_SetLoopPoints,"JS_Sound_SetPaused":_JS_Sound_SetPaused,"JS_Sound_SetPitch":_JS_Sound_SetPitch,"JS_Sound_SetPosition":_JS_Sound_SetPosition,"JS_Sound_SetVolume":_JS_Sound_SetVolume,"JS_Sound_Stop":_JS_Sound_Stop,"JS_SystemInfo_GetBrowserName":_JS_SystemInfo_GetBrowserName,"JS_SystemInfo_GetBrowserVersionString":_JS_SystemInfo_GetBrowserVersionString,"JS_SystemInfo_GetCanvasClientSize":_JS_SystemInfo_GetCanvasClientSize,"JS_SystemInfo_GetDocumentURL":_JS_SystemInfo_GetDocumentURL,"JS_SystemInfo_GetGPUInfo":_JS_SystemInfo_GetGPUInfo,"JS_SystemInfo_GetLanguage":_JS_SystemInfo_GetLanguage,"JS_SystemInfo_GetMatchWebGLToCanvasSize":_JS_SystemInfo_GetMatchWebGLToCanvasSize,"JS_SystemInfo_GetMemory":_JS_SystemInfo_GetMemory,"JS_SystemInfo_GetOS":_JS_SystemInfo_GetOS,"JS_SystemInfo_GetPreferredDevicePixelRatio":_JS_SystemInfo_GetPreferredDevicePixelRatio,"JS_SystemInfo_GetScreenSize":_JS_SystemInfo_GetScreenSize,"JS_SystemInfo_GetStreamingAssetsURL":_JS_SystemInfo_GetStreamingAssetsURL,"JS_SystemInfo_HasAstcHdr":_JS_SystemInfo_HasAstcHdr,"JS_SystemInfo_HasCursorLock":_JS_SystemInfo_HasCursorLock,"JS_SystemInfo_HasFullscreen":_J
S_SystemInfo_HasFullscreen,"JS_SystemInfo_HasWebGL":_JS_SystemInfo_HasWebGL,"JS_UnityEngineShouldQuit":_JS_UnityEngineShouldQuit,"JS_WebRequest_Abort":_JS_WebRequest_Abort,"JS_WebRequest_Create":_JS_WebRequest_Create,"JS_WebRequest_GetResponseMetaData":_JS_WebRequest_GetResponseMetaData,"JS_WebRequest_GetResponseMetaDataLengths":_JS_WebRequest_GetResponseMetaDataLengths,"JS_WebRequest_Release":_JS_WebRequest_Release,"JS_WebRequest_Send":_JS_WebRequest_Send,"JS_WebRequest_SetRedirectLimit":_JS_WebRequest_SetRedirectLimit,"JS_WebRequest_SetRequestHeader":_JS_WebRequest_SetRequestHeader,"JS_WebRequest_SetTimeout":_JS_WebRequest_SetTimeout,"__cxa_allocate_exception":___cxa_allocate_exception,"__cxa_begin_catch":___cxa_begin_catch,"__cxa_end_catch":___cxa_end_catch,"__cxa_find_matching_catch_2":___cxa_find_matching_catch_2,"__cxa_find_matching_catch_3":___cxa_find_matching_catch_3,"__cxa_find_matching_catch_4":___cxa_find_matching_catch_4,"__cxa_free_exception":___cxa_free_exception,"__cxa_rethrow":___cxa_rethrow,"__cxa_throw":___cxa_throw,"__resumeException":___resumeException,"__syscall__newselect":___syscall__newselect,"__syscall_accept4":___syscall_accept4,"__syscall_bind":___syscall_bind,"__syscall_chmod":___syscall_chmod,"__syscall_connect":___syscall_connect,"__syscall_dup3":___syscall_dup3,"__syscall_faccessat":___syscall_faccessat,"__syscall_fcntl64":___syscall_fcntl64,"__syscall_fstat64":___syscall_fstat64,"__syscall_ftruncate64":___syscall_ftruncate64,"__syscall_getcwd":___syscall_getcwd,"__syscall_getdents64":___syscall_getdents64,"__syscall_getpeername":___syscall_getpeername,"__syscall_getsockname":___syscall_getsockname,"__syscall_getsockopt":___syscall_getsockopt,"__syscall_ioctl":___syscall_ioctl,"__syscall_listen":___syscall_listen,"__syscall_lstat64":___syscall_lstat64,"__syscall_mkdir":___syscall_mkdir,"__syscall_newfstatat":___syscall_newfstatat,"__syscall_openat":___syscall_openat,"__syscall_pipe":___syscall_pipe,"__syscall_poll":___syscall_poll,"__
syscall_readlinkat":___syscall_readlinkat,"__syscall_recvfrom":___syscall_recvfrom,"__syscall_recvmsg":___syscall_recvmsg,"__syscall_renameat":___syscall_renameat,"__syscall_rmdir":___syscall_rmdir,"__syscall_sendmsg":___syscall_sendmsg,"__syscall_sendto":___syscall_sendto,"__syscall_socket":___syscall_socket,"__syscall_stat64":___syscall_stat64,"__syscall_statfs64":___syscall_statfs64,"__syscall_truncate64":___syscall_truncate64,"__syscall_unlinkat":___syscall_unlinkat,"__syscall_utimensat":___syscall_utimensat,"_dlopen_js":__dlopen_js,"_dlsym_js":__dlsym_js,"_emscripten_date_now":__emscripten_date_now,"_emscripten_get_now_is_monotonic":__emscripten_get_now_is_monotonic,"_emscripten_throw_longjmp":__emscripten_throw_longjmp,"_gmtime_js":__gmtime_js,"_localtime_js":__localtime_js,"_mktime_js":__mktime_js,"_mmap_js":__mmap_js,"_munmap_js":__munmap_js,"_tzset_js":__tzset_js,"abort":_abort,"emscripten_asm_const_int_sync_on_main_thread":_emscripten_asm_const_int_sync_on_main_thread,"emscripten_cancel_main_loop":_emscripten_cancel_main_loop,"emscripten_clear_interval":_emscripten_clear_interval,"emscripten_exit_fullscreen":_emscripten_exit_fullscreen,"emscripten_exit_pointerlock":_emscripten_exit_pointerlock,"emscripten_get_canvas_element_size":_emscripten_get_canvas_element_size,"emscripten_get_fullscreen_status":_emscripten_get_fullscreen_status,"emscripten_get_gamepad_status":_emscripten_get_gamepad_status,"emscripten_get_heap_max":_emscripten_get_heap_max,"emscripten_get_now":_emscripten_get_now,"emscripten_get_now_res":_emscripten_get_now_res,"emscripten_get_num_gamepads":_emscripten_get_num_gamepads,"emscripten_html5_remove_all_event_listeners":_emscripten_html5_remove_all_event_listeners,"emscripten_is_webgl_context_lost":_emscripten_is_webgl_context_lost,"emscripten_log":_emscripten_log,"emscripten_memcpy_big":_emscripten_memcpy_big,"emscripten_request_fullscreen":_emscripten_request_fullscreen,"emscripten_request_pointerlock":_emscripten_request_pointerlock,"ems
cripten_resize_heap":_emscripten_resize_heap,"emscripten_sample_gamepad_data":_emscripten_sample_gamepad_data,"emscripten_set_blur_callback_on_thread":_emscripten_set_blur_callback_on_thread,"emscripten_set_canvas_element_size":_emscripten_set_canvas_element_size,"emscripten_set_focus_callback_on_thread":_emscripten_set_focus_callback_on_thread,"emscripten_set_fullscreenchange_callback_on_thread":_emscripten_set_fullscreenchange_callback_on_thread,"emscripten_set_gamepadconnected_callback_on_thread":_emscripten_set_gamepadconnected_callback_on_thread,"emscripten_set_gamepaddisconnected_callback_on_thread":_emscripten_set_gamepaddisconnected_callback_on_thread,"emscripten_set_interval":_emscripten_set_interval,"emscripten_set_keydown_callback_on_thread":_emscripten_set_keydown_callback_on_thread,"emscripten_set_keypress_callback_on_thread":_emscripten_set_keypress_callback_on_thread,"emscripten_set_keyup_callback_on_thread":_emscripten_set_keyup_callback_on_thread,"emscripten_set_main_loop":_emscripten_set_main_loop,"emscripten_set_main_loop_timing":_emscripten_set_main_loop_timing,"emscripten_set_mousedown_callback_on_thread":_emscripten_set_mousedown_callback_on_thread,"emscripten_set_mousemove_callback_on_thread":_emscripten_set_mousemove_callback_on_thread,"emscripten_set_mouseup_callback_on_thread":_emscripten_set_mouseup_callback_on_thread,"emscripten_set_touchcancel_callback_on_thread":_emscripten_set_touchcancel_callback_on_thread,"emscripten_set_touchend_callback_on_thread":_emscripten_set_touchend_callback_on_thread,"emscripten_set_touchmove_callback_on_thread":_emscripten_set_touchmove_callback_on_thread,"emscripten_set_touchstart_callback_on_thread":_emscripten_set_touchstart_callback_on_thread,"emscripten_set_wheel_callback_on_thread":_emscripten_set_wheel_callback_on_thread,"emscripten_webgl_create_context":_emscripten_webgl_create_context,"emscripten_webgl_destroy_context":_emscripten_webgl_destroy_context,"emscripten_webgl_enable_extension":_emscripte
n_webgl_enable_extension,"emscripten_webgl_get_current_context":_emscripten_webgl_get_current_context,"emscripten_webgl_init_context_attributes":_emscripten_webgl_init_context_attributes,"emscripten_webgl_make_context_current":_emscripten_webgl_make_context_current,"environ_get":_environ_get,"environ_sizes_get":_environ_sizes_get,"exit":_exit,"fd_close":_fd_close,"fd_fdstat_get":_fd_fdstat_get,"fd_read":_fd_read,"fd_seek":_fd_seek,"fd_write":_fd_write,"getTempRet0":_getTempRet0,"getaddrinfo":_getaddrinfo,"gethostbyaddr":_gethostbyaddr,"gethostbyname":_gethostbyname,"getnameinfo":_getnameinfo,"glActiveTexture":_glActiveTexture,"glAttachShader":_glAttachShader,"glBeginQuery":_glBeginQuery,"glBindAttribLocation":_glBindAttribLocation,"glBindBuffer":_glBindBuffer,"glBindBufferBase":_glBindBufferBase,"glBindBufferRange":_glBindBufferRange,"glBindFramebuffer":_glBindFramebuffer,"glBindRenderbuffer":_glBindRenderbuffer,"glBindSampler":_glBindSampler,"glBindTexture":_glBindTexture,"glBindVertexArray":_glBindVertexArray,"glBlendEquation":_glBlendEquation,"glBlendEquationSeparate":_glBlendEquationSeparate,"glBlendFuncSeparate":_glBlendFuncSeparate,"glBlitFramebuffer":_glBlitFramebuffer,"glBufferData":_glBufferData,"glBufferSubData":_glBufferSubData,"glCheckFramebufferStatus":_glCheckFramebufferStatus,"glClear":_glClear,"glClearBufferfi":_glClearBufferfi,"glClearBufferfv":_glClearBufferfv,"glClearBufferuiv":_glClearBufferuiv,"glClearColor":_glClearColor,"glClearDepthf":_glClearDepthf,"glClearStencil":_glClearStencil,"glClientWaitSync":_glClientWaitSync,"glColorMask":_glColorMask,"glCompileShader":_glCompileShader,"glCompressedTexImage2D":_glCompressedTexImage2D,"glCompressedTexImage3D":_glCompressedTexImage3D,"glCompressedTexSubImage2D":_glCompressedTexSubImage2D,"glCompressedTexSubImage3D":_glCompressedTexSubImage3D,"glCopyBufferSubData":_glCopyBufferSubData,"glCopyTexImage2D":_glCopyTexImage2D,"glCopyTexSubImage2D":_glCopyTexSubImage2D,"glCreateProgram":_glCreateProgram,"glC
reateShader":_glCreateShader,"glCullFace":_glCullFace,"glDeleteBuffers":_glDeleteBuffers,"glDeleteFramebuffers":_glDeleteFramebuffers,"glDeleteProgram":_glDeleteProgram,"glDeleteQueries":_glDeleteQueries,"glDeleteRenderbuffers":_glDeleteRenderbuffers,"glDeleteSamplers":_glDeleteSamplers,"glDeleteShader":_glDeleteShader,"glDeleteSync":_glDeleteSync,"glDeleteTextures":_glDeleteTextures,"glDeleteVertexArrays":_glDeleteVertexArrays,"glDepthFunc":_glDepthFunc,"glDepthMask":_glDepthMask,"glDetachShader":_glDetachShader,"glDisable":_glDisable,"glDisableVertexAttribArray":_glDisableVertexAttribArray,"glDrawArrays":_glDrawArrays,"glDrawArraysInstanced":_glDrawArraysInstanced,"glDrawBuffers":_glDrawBuffers,"glDrawElements":_glDrawElements,"glDrawElementsInstanced":_glDrawElementsInstanced,"glEnable":_glEnable,"glEnableVertexAttribArray":_glEnableVertexAttribArray,"glEndQuery":_glEndQuery,"glFenceSync":_glFenceSync,"glFinish":_glFinish,"glFlush":_glFlush,"glFlushMappedBufferRange":_glFlushMappedBufferRange,"glFramebufferRenderbuffer":_glFramebufferRenderbuffer,"glFramebufferTexture2D":_glFramebufferTexture2D,"glFramebufferTextureLayer":_glFramebufferTextureLayer,"glFrontFace":_glFrontFace,"glGenBuffers":_glGenBuffers,"glGenFramebuffers":_glGenFramebuffers,"glGenQueries":_glGenQueries,"glGenRenderbuffers":_glGenRenderbuffers,"glGenSamplers":_glGenSamplers,"glGenTextures":_glGenTextures,"glGenVertexArrays":_glGenVertexArrays,"glGenerateMipmap":_glGenerateMipmap,"glGetActiveAttrib":_glGetActiveAttrib,"glGetActiveUniform":_glGetActiveUniform,"glGetActiveUniformBlockName":_glGetActiveUniformBlockName,"glGetActiveUniformBlockiv":_glGetActiveUniformBlockiv,"glGetActiveUniformsiv":_glGetActiveUniformsiv,"glGetAttribLocation":_glGetAttribLocation,"glGetBufferSubData":_glGetBufferSubData,"glGetError":_glGetError,"glGetFramebufferAttachmentParameteriv":_glGetFramebufferAttachmentParameteriv,"glGetIntegeri_v":_glGetIntegeri_v,"glGetIntegerv":_glGetIntegerv,"glGetInternalformativ":_glGetIn
ternalformativ,"glGetProgramBinary":_glGetProgramBinary,"glGetProgramInfoLog":_glGetProgramInfoLog,"glGetProgramiv":_glGetProgramiv,"glGetQueryObjectuiv":_glGetQueryObjectuiv,"glGetQueryiv":_glGetQueryiv,"glGetRenderbufferParameteriv":_glGetRenderbufferParameteriv,"glGetShaderInfoLog":_glGetShaderInfoLog,"glGetShaderPrecisionFormat":_glGetShaderPrecisionFormat,"glGetShaderSource":_glGetShaderSource,"glGetShaderiv":_glGetShaderiv,"glGetString":_glGetString,"glGetStringi":_glGetStringi,"glGetTexParameteriv":_glGetTexParameteriv,"glGetUniformBlockIndex":_glGetUniformBlockIndex,"glGetUniformIndices":_glGetUniformIndices,"glGetUniformLocation":_glGetUniformLocation,"glGetUniformiv":_glGetUniformiv,"glGetVertexAttribiv":_glGetVertexAttribiv,"glInvalidateFramebuffer":_glInvalidateFramebuffer,"glIsEnabled":_glIsEnabled,"glIsVertexArray":_glIsVertexArray,"glLinkProgram":_glLinkProgram,"glMapBufferRange":_glMapBufferRange,"glPixelStorei":_glPixelStorei,"glPolygonOffset":_glPolygonOffset,"glProgramBinary":_glProgramBinary,"glProgramParameteri":_glProgramParameteri,"glReadBuffer":_glReadBuffer,"glReadPixels":_glReadPixels,"glRenderbufferStorage":_glRenderbufferStorage,"glRenderbufferStorageMultisample":_glRenderbufferStorageMultisample,"glSamplerParameteri":_glSamplerParameteri,"glScissor":_glScissor,"glShaderSource":_glShaderSource,"glStencilFuncSeparate":_glStencilFuncSeparate,"glStencilMask":_glStencilMask,"glStencilOpSeparate":_glStencilOpSeparate,"glTexImage2D":_glTexImage2D,"glTexImage3D":_glTexImage3D,"glTexParameterf":_glTexParameterf,"glTexParameteri":_glTexParameteri,"glTexParameteriv":_glTexParameteriv,"glTexStorage2D":_glTexStorage2D,"glTexStorage3D":_glTexStorage3D,"glTexSubImage2D":_glTexSubImage2D,"glTexSubImage3D":_glTexSubImage3D,"glUniform1fv":_glUniform1fv,"glUniform1i":_glUniform1i,"glUniform1iv":_glUniform1iv,"glUniform1uiv":_glUniform1uiv,"glUniform2fv":_glUniform2fv,"glUniform2iv":_glUniform2iv,"glUniform2uiv":_glUniform2uiv,"glUniform3fv":_glUniform3fv,"
glUniform3iv":_glUniform3iv,"glUniform3uiv":_glUniform3uiv,"glUniform4fv":_glUniform4fv,"glUniform4iv":_glUniform4iv,"glUniform4uiv":_glUniform4uiv,"glUniformBlockBinding":_glUniformBlockBinding,"glUniformMatrix3fv":_glUniformMatrix3fv,"glUniformMatrix4fv":_glUniformMatrix4fv,"glUnmapBuffer":_glUnmapBuffer,"glUseProgram":_glUseProgram,"glValidateProgram":_glValidateProgram,"glVertexAttrib4f":_glVertexAttrib4f,"glVertexAttrib4fv":_glVertexAttrib4fv,"glVertexAttribIPointer":_glVertexAttribIPointer,"glVertexAttribPointer":_glVertexAttribPointer,"glViewport":_glViewport,"invoke_dddi":invoke_dddi,"invoke_ddiii":invoke_ddiii,"invoke_dii":invoke_dii,"invoke_diii":invoke_diii,"invoke_diiii":invoke_diiii,"invoke_dji":invoke_dji,"invoke_fffi":invoke_fffi,"invoke_fi":invoke_fi,"invoke_fii":invoke_fii,"invoke_fiii":invoke_fiii,"invoke_i":invoke_i,"invoke_idi":invoke_idi,"invoke_ifi":invoke_ifi,"invoke_ii":invoke_ii,"invoke_iidi":invoke_iidi,"invoke_iifi":invoke_iifi,"invoke_iii":invoke_iii,"invoke_iiifi":invoke_iiifi,"invoke_iiifii":invoke_iiifii,"invoke_iiii":invoke_iiii,"invoke_iiiidii":invoke_iiiidii,"invoke_iiiifii":invoke_iiiifii,"invoke_iiiii":invoke_iiiii,"invoke_iiiiii":invoke_iiiiii,"invoke_iiiiiii":invoke_iiiiiii,"invoke_iiiiiiii":invoke_iiiiiiii,"invoke_iiiiiiiii":invoke_iiiiiiiii,"invoke_iiiiiiiiifi":invoke_iiiiiiiiifi,"invoke_iiiiiiiiii":invoke_iiiiiiiiii,"invoke_iiiiiiiiiii":invoke_iiiiiiiiiii,"invoke_iiiiiiiiiiii":invoke_iiiiiiiiiiii,"invoke_iiiiiiiiiji":invoke_iiiiiiiiiji,"invoke_iiiiij":invoke_iiiiij,"invoke_iiiijii":invoke_iiiijii,"invoke_iiiijjii":invoke_iiiijjii,"invoke_iiijiii":invoke_iiijiii,"invoke_iij":invoke_iij,"invoke_iiji":invoke_iiji,"invoke_iijii":invoke_iijii,"invoke_iijiii":invoke_iijiii,"invoke_iijiiiiii":invoke_iijiiiiii,"invoke_iijji":invoke_iijji,"invoke_iijjiiiiii":invoke_iijjiiiiii,"invoke_iji":invoke_iji,"invoke_ijji":invoke_ijji,"invoke_j":invoke_j,"invoke_jdi":invoke_jdi,"invoke_ji":invoke_ji,"invoke_jidi":invoke_jidi,"invoke_jii":invoke
_jii,"invoke_jiidi":invoke_jiidi,"invoke_jiii":invoke_jiii,"invoke_jiiii":invoke_jiiii,"invoke_jiiiii":invoke_jiiiii,"invoke_jiiiiiiiiii":invoke_jiiiiiiiiii,"invoke_jiji":invoke_jiji,"invoke_jijii":invoke_jijii,"invoke_jji":invoke_jji,"invoke_jjii":invoke_jjii,"invoke_jjji":invoke_jjji,"invoke_v":invoke_v,"invoke_vi":invoke_vi,"invoke_vidd":invoke_vidd,"invoke_vidi":invoke_vidi,"invoke_viffi":invoke_viffi,"invoke_vifi":invoke_vifi,"invoke_vifii":invoke_vifii,"invoke_vii":invoke_vii,"invoke_viidi":invoke_viidi,"invoke_viiffi":invoke_viiffi,"invoke_viifi":invoke_viifi,"invoke_viifii":invoke_viifii,"invoke_viii":invoke_viii,"invoke_viiifi":invoke_viiifi,"invoke_viiii":invoke_viiii,"invoke_viiiifi":invoke_viiiifi,"invoke_viiiii":invoke_viiiii,"invoke_viiiiii":invoke_viiiiii,"invoke_viiiiiii":invoke_viiiiiii,"invoke_viiiiiiii":invoke_viiiiiiii,"invoke_viiiiiiiii":invoke_viiiiiiiii,"invoke_viiiiiiiiii":invoke_viiiiiiiiii,"invoke_viiiiiiiiiiii":invoke_viiiiiiiiiiii,"invoke_viiiiiiiiiiiii":invoke_viiiiiiiiiiiii,"invoke_viiiiiiiiiiiiiii":invoke_viiiiiiiiiiiiiii,"invoke_viiiji":invoke_viiiji,"invoke_viiji":invoke_viiji,"invoke_viijii":invoke_viijii,"invoke_viijiiijiiii":invoke_viijiiijiiii,"invoke_viji":invoke_viji,"invoke_vijii":invoke_vijii,"invoke_vijiii":invoke_vijiii,"invoke_vijjji":invoke_vijjji,"invoke_vji":invoke_vji,"invoke_vjiiiii":invoke_vjiiiii,"invoke_vjjjiiii":invoke_vjjjiiii,"llvm_eh_typeid_for":_llvm_eh_typeid_for,"setTempRet0":_setTempRet0,"strftime":_strftime};var asm=createWasm();var ___wasm_call_ctors=Module["___wasm_call_ctors"]=function(){return(___wasm_call_ctors=Module["___wasm_call_ctors"]=Module["asm"]["__wasm_call_ctors"]).apply(null,arguments)};var _getMemInfo=Module["_getMemInfo"]=function(){return(_getMemInfo=Module["_getMemInfo"]=Module["asm"]["getMemInfo"]).apply(null,arguments)};var 
_SendMessageFloat=Module["_SendMessageFloat"]=function(){return(_SendMessageFloat=Module["_SendMessageFloat"]=Module["asm"]["SendMessageFloat"]).apply(null,arguments)};var _SendMessageString=Module["_SendMessageString"]=function(){return(_SendMessageString=Module["_SendMessageString"]=Module["asm"]["SendMessageString"]).apply(null,arguments)};var _SendMessage=Module["_SendMessage"]=function(){return(_SendMessage=Module["_SendMessage"]=Module["asm"]["SendMessage"]).apply(null,arguments)};var _SetFullscreen=Module["_SetFullscreen"]=function(){return(_SetFullscreen=Module["_SetFullscreen"]=Module["asm"]["SetFullscreen"]).apply(null,arguments)};var _main=Module["_main"]=function(){return(_main=Module["_main"]=Module["asm"]["main"]).apply(null,arguments)};var ___errno_location=Module["___errno_location"]=function(){return(___errno_location=Module["___errno_location"]=Module["asm"]["__errno_location"]).apply(null,arguments)};var ___dl_seterr=Module["___dl_seterr"]=function(){return(___dl_seterr=Module["___dl_seterr"]=Module["asm"]["__dl_seterr"]).apply(null,arguments)};var _htonl=Module["_htonl"]=function(){return(_htonl=Module["_htonl"]=Module["asm"]["htonl"]).apply(null,arguments)};var _htons=Module["_htons"]=function(){return(_htons=Module["_htons"]=Module["asm"]["htons"]).apply(null,arguments)};var _ntohs=Module["_ntohs"]=function(){return(_ntohs=Module["_ntohs"]=Module["asm"]["ntohs"]).apply(null,arguments)};var _strlen=Module["_strlen"]=function(){return(_strlen=Module["_strlen"]=Module["asm"]["strlen"]).apply(null,arguments)};var _malloc=Module["_malloc"]=function(){return(_malloc=Module["_malloc"]=Module["asm"]["malloc"]).apply(null,arguments)};var _free=Module["_free"]=function(){return(_free=Module["_free"]=Module["asm"]["free"]).apply(null,arguments)};var 
_emscripten_builtin_memalign=Module["_emscripten_builtin_memalign"]=function(){return(_emscripten_builtin_memalign=Module["_emscripten_builtin_memalign"]=Module["asm"]["emscripten_builtin_memalign"]).apply(null,arguments)};var _setThrew=Module["_setThrew"]=function(){return(_setThrew=Module["_setThrew"]=Module["asm"]["setThrew"]).apply(null,arguments)};var _saveSetjmp=Module["_saveSetjmp"]=function(){return(_saveSetjmp=Module["_saveSetjmp"]=Module["asm"]["saveSetjmp"]).apply(null,arguments)};var stackSave=Module["stackSave"]=function(){return(stackSave=Module["stackSave"]=Module["asm"]["stackSave"]).apply(null,arguments)};var stackRestore=Module["stackRestore"]=function(){return(stackRestore=Module["stackRestore"]=Module["asm"]["stackRestore"]).apply(null,arguments)};var stackAlloc=Module["stackAlloc"]=function(){return(stackAlloc=Module["stackAlloc"]=Module["asm"]["stackAlloc"]).apply(null,arguments)};var ___cxa_can_catch=Module["___cxa_can_catch"]=function(){return(___cxa_can_catch=Module["___cxa_can_catch"]=Module["asm"]["__cxa_can_catch"]).apply(null,arguments)};var ___cxa_is_pointer_type=Module["___cxa_is_pointer_type"]=function(){return(___cxa_is_pointer_type=Module["___cxa_is_pointer_type"]=Module["asm"]["__cxa_is_pointer_type"]).apply(null,arguments)};var dynCall_iidiiii=Module["dynCall_iidiiii"]=function(){return(dynCall_iidiiii=Module["dynCall_iidiiii"]=Module["asm"]["dynCall_iidiiii"]).apply(null,arguments)};var dynCall_vii=Module["dynCall_vii"]=function(){return(dynCall_vii=Module["dynCall_vii"]=Module["asm"]["dynCall_vii"]).apply(null,arguments)};var dynCall_iiii=Module["dynCall_iiii"]=function(){return(dynCall_iiii=Module["dynCall_iiii"]=Module["asm"]["dynCall_iiii"]).apply(null,arguments)};var dynCall_iii=Module["dynCall_iii"]=function(){return(dynCall_iii=Module["dynCall_iii"]=Module["asm"]["dynCall_iii"]).apply(null,arguments)};var 
dynCall_ii=Module["dynCall_ii"]=function(){return(dynCall_ii=Module["dynCall_ii"]=Module["asm"]["dynCall_ii"]).apply(null,arguments)};var dynCall_jiji=Module["dynCall_jiji"]=function(){return(dynCall_jiji=Module["dynCall_jiji"]=Module["asm"]["dynCall_jiji"]).apply(null,arguments)};var dynCall_vi=Module["dynCall_vi"]=function(){return(dynCall_vi=Module["dynCall_vi"]=Module["asm"]["dynCall_vi"]).apply(null,arguments)};var dynCall_viii=Module["dynCall_viii"]=function(){return(dynCall_viii=Module["dynCall_viii"]=Module["asm"]["dynCall_viii"]).apply(null,arguments)};var dynCall_iiiii=Module["dynCall_iiiii"]=function(){return(dynCall_iiiii=Module["dynCall_iiiii"]=Module["asm"]["dynCall_iiiii"]).apply(null,arguments)};var dynCall_v=Module["dynCall_v"]=function(){return(dynCall_v=Module["dynCall_v"]=Module["asm"]["dynCall_v"]).apply(null,arguments)};var dynCall_i=Module["dynCall_i"]=function(){return(dynCall_i=Module["dynCall_i"]=Module["asm"]["dynCall_i"]).apply(null,arguments)};var dynCall_viiiiii=Module["dynCall_viiiiii"]=function(){return(dynCall_viiiiii=Module["dynCall_viiiiii"]=Module["asm"]["dynCall_viiiiii"]).apply(null,arguments)};var dynCall_viiiii=Module["dynCall_viiiii"]=function(){return(dynCall_viiiii=Module["dynCall_viiiii"]=Module["asm"]["dynCall_viiiii"]).apply(null,arguments)};var dynCall_viiii=Module["dynCall_viiii"]=function(){return(dynCall_viiii=Module["dynCall_viiii"]=Module["asm"]["dynCall_viiii"]).apply(null,arguments)};var dynCall_iiiiii=Module["dynCall_iiiiii"]=function(){return(dynCall_iiiiii=Module["dynCall_iiiiii"]=Module["asm"]["dynCall_iiiiii"]).apply(null,arguments)};var dynCall_iiiiiiii=Module["dynCall_iiiiiiii"]=function(){return(dynCall_iiiiiiii=Module["dynCall_iiiiiiii"]=Module["asm"]["dynCall_iiiiiiii"]).apply(null,arguments)};var dynCall_iiijiii=Module["dynCall_iiijiii"]=function(){return(dynCall_iiijiii=Module["dynCall_iiijiii"]=Module["asm"]["dynCall_iiijiii"]).apply(null,arguments)};var 
dynCall_iij=Module["dynCall_iij"]=function(){return(dynCall_iij=Module["dynCall_iij"]=Module["asm"]["dynCall_iij"]).apply(null,arguments)};var dynCall_iiiiiii=Module["dynCall_iiiiiii"]=function(){return(dynCall_iiiiiii=Module["dynCall_iiiiiii"]=Module["asm"]["dynCall_iiiiiii"]).apply(null,arguments)};var dynCall_jii=Module["dynCall_jii"]=function(){return(dynCall_jii=Module["dynCall_jii"]=Module["asm"]["dynCall_jii"]).apply(null,arguments)};var dynCall_iiifii=Module["dynCall_iiifii"]=function(){return(dynCall_iiifii=Module["dynCall_iiifii"]=Module["asm"]["dynCall_iiifii"]).apply(null,arguments)};var dynCall_viifi=Module["dynCall_viifi"]=function(){return(dynCall_viifi=Module["dynCall_viifi"]=Module["asm"]["dynCall_viifi"]).apply(null,arguments)};var dynCall_iijiii=Module["dynCall_iijiii"]=function(){return(dynCall_iijiii=Module["dynCall_iijiii"]=Module["asm"]["dynCall_iijiii"]).apply(null,arguments)};var dynCall_vijii=Module["dynCall_vijii"]=function(){return(dynCall_vijii=Module["dynCall_vijii"]=Module["asm"]["dynCall_vijii"]).apply(null,arguments)};var dynCall_iiiijii=Module["dynCall_iiiijii"]=function(){return(dynCall_iiiijii=Module["dynCall_iiiijii"]=Module["asm"]["dynCall_iiiijii"]).apply(null,arguments)};var dynCall_viji=Module["dynCall_viji"]=function(){return(dynCall_viji=Module["dynCall_viji"]=Module["asm"]["dynCall_viji"]).apply(null,arguments)};var dynCall_viiji=Module["dynCall_viiji"]=function(){return(dynCall_viiji=Module["dynCall_viiji"]=Module["asm"]["dynCall_viiji"]).apply(null,arguments)};var dynCall_vidi=Module["dynCall_vidi"]=function(){return(dynCall_vidi=Module["dynCall_vidi"]=Module["asm"]["dynCall_vidi"]).apply(null,arguments)};var dynCall_viidi=Module["dynCall_viidi"]=function(){return(dynCall_viidi=Module["dynCall_viidi"]=Module["asm"]["dynCall_viidi"]).apply(null,arguments)};var 
dynCall_viiiiiii=Module["dynCall_viiiiiii"]=function(){return(dynCall_viiiiiii=Module["dynCall_viiiiiii"]=Module["asm"]["dynCall_viiiiiii"]).apply(null,arguments)};var dynCall_viiffi=Module["dynCall_viiffi"]=function(){return(dynCall_viiffi=Module["dynCall_viiffi"]=Module["asm"]["dynCall_viiffi"]).apply(null,arguments)};var dynCall_fiii=Module["dynCall_fiii"]=function(){return(dynCall_fiii=Module["dynCall_fiii"]=Module["asm"]["dynCall_fiii"]).apply(null,arguments)};var dynCall_diidi=Module["dynCall_diidi"]=function(){return(dynCall_diidi=Module["dynCall_diidi"]=Module["asm"]["dynCall_diidi"]).apply(null,arguments)};var dynCall_jiiji=Module["dynCall_jiiji"]=function(){return(dynCall_jiiji=Module["dynCall_jiiji"]=Module["asm"]["dynCall_jiiji"]).apply(null,arguments)};var dynCall_fiifi=Module["dynCall_fiifi"]=function(){return(dynCall_fiifi=Module["dynCall_fiifi"]=Module["asm"]["dynCall_fiifi"]).apply(null,arguments)};var dynCall_iiffi=Module["dynCall_iiffi"]=function(){return(dynCall_iiffi=Module["dynCall_iiffi"]=Module["asm"]["dynCall_iiffi"]).apply(null,arguments)};var dynCall_iiiifi=Module["dynCall_iiiifi"]=function(){return(dynCall_iiiifi=Module["dynCall_iiiifi"]=Module["asm"]["dynCall_iiiifi"]).apply(null,arguments)};var dynCall_viiiiiiii=Module["dynCall_viiiiiiii"]=function(){return(dynCall_viiiiiiii=Module["dynCall_viiiiiiii"]=Module["asm"]["dynCall_viiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiii=Module["dynCall_viiiiiiiii"]=function(){return(dynCall_viiiiiiiii=Module["dynCall_viiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiiii=Module["dynCall_viiiiiiiiii"]=function(){return(dynCall_viiiiiiiiii=Module["dynCall_viiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiiiii=Module["dynCall_viiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiii=Module["dynCall_viiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiii"]).apply(null,arguments)};var 
dynCall_ji=Module["dynCall_ji"]=function(){return(dynCall_ji=Module["dynCall_ji"]=Module["asm"]["dynCall_ji"]).apply(null,arguments)};var dynCall_jjji=Module["dynCall_jjji"]=function(){return(dynCall_jjji=Module["dynCall_jjji"]=Module["asm"]["dynCall_jjji"]).apply(null,arguments)};var dynCall_dii=Module["dynCall_dii"]=function(){return(dynCall_dii=Module["dynCall_dii"]=Module["asm"]["dynCall_dii"]).apply(null,arguments)};var dynCall_viijiiijiiii=Module["dynCall_viijiiijiiii"]=function(){return(dynCall_viijiiijiiii=Module["dynCall_viijiiijiiii"]=Module["asm"]["dynCall_viijiiijiiii"]).apply(null,arguments)};var dynCall_vifi=Module["dynCall_vifi"]=function(){return(dynCall_vifi=Module["dynCall_vifi"]=Module["asm"]["dynCall_vifi"]).apply(null,arguments)};var dynCall_iifi=Module["dynCall_iifi"]=function(){return(dynCall_iifi=Module["dynCall_iifi"]=Module["asm"]["dynCall_iifi"]).apply(null,arguments)};var dynCall_fffi=Module["dynCall_fffi"]=function(){return(dynCall_fffi=Module["dynCall_fffi"]=Module["asm"]["dynCall_fffi"]).apply(null,arguments)};var dynCall_ijji=Module["dynCall_ijji"]=function(){return(dynCall_ijji=Module["dynCall_ijji"]=Module["asm"]["dynCall_ijji"]).apply(null,arguments)};var dynCall_jji=Module["dynCall_jji"]=function(){return(dynCall_jji=Module["dynCall_jji"]=Module["asm"]["dynCall_jji"]).apply(null,arguments)};var dynCall_dddi=Module["dynCall_dddi"]=function(){return(dynCall_dddi=Module["dynCall_dddi"]=Module["asm"]["dynCall_dddi"]).apply(null,arguments)};var dynCall_jiii=Module["dynCall_jiii"]=function(){return(dynCall_jiii=Module["dynCall_jiii"]=Module["asm"]["dynCall_jiii"]).apply(null,arguments)};var dynCall_diii=Module["dynCall_diii"]=function(){return(dynCall_diii=Module["dynCall_diii"]=Module["asm"]["dynCall_diii"]).apply(null,arguments)};var dynCall_iidi=Module["dynCall_iidi"]=function(){return(dynCall_iidi=Module["dynCall_iidi"]=Module["asm"]["dynCall_iidi"]).apply(null,arguments)};var 
dynCall_jiiii=Module["dynCall_jiiii"]=function(){return(dynCall_jiiii=Module["dynCall_jiiii"]=Module["asm"]["dynCall_jiiii"]).apply(null,arguments)};var dynCall_diiii=Module["dynCall_diiii"]=function(){return(dynCall_diiii=Module["dynCall_diiii"]=Module["asm"]["dynCall_diiii"]).apply(null,arguments)};var dynCall_viiiji=Module["dynCall_viiiji"]=function(){return(dynCall_viiiji=Module["dynCall_viiiji"]=Module["asm"]["dynCall_viiiji"]).apply(null,arguments)};var dynCall_iiiiiiiii=Module["dynCall_iiiiiiiii"]=function(){return(dynCall_iiiiiiiii=Module["dynCall_iiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiii"]).apply(null,arguments)};var dynCall_jdi=Module["dynCall_jdi"]=function(){return(dynCall_jdi=Module["dynCall_jdi"]=Module["asm"]["dynCall_jdi"]).apply(null,arguments)};var dynCall_vijjji=Module["dynCall_vijjji"]=function(){return(dynCall_vijjji=Module["dynCall_vijjji"]=Module["asm"]["dynCall_vijjji"]).apply(null,arguments)};var dynCall_iiiiij=Module["dynCall_iiiiij"]=function(){return(dynCall_iiiiij=Module["dynCall_iiiiij"]=Module["asm"]["dynCall_iiiiij"]).apply(null,arguments)};var dynCall_iiiiiiiiii=Module["dynCall_iiiiiiiiii"]=function(){return(dynCall_iiiiiiiiii=Module["dynCall_iiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiii"]).apply(null,arguments)};var dynCall_iiji=Module["dynCall_iiji"]=function(){return(dynCall_iiji=Module["dynCall_iiji"]=Module["asm"]["dynCall_iiji"]).apply(null,arguments)};var dynCall_jjii=Module["dynCall_jjii"]=function(){return(dynCall_jjii=Module["dynCall_jjii"]=Module["asm"]["dynCall_jjii"]).apply(null,arguments)};var dynCall_dji=Module["dynCall_dji"]=function(){return(dynCall_dji=Module["dynCall_dji"]=Module["asm"]["dynCall_dji"]).apply(null,arguments)};var dynCall_idi=Module["dynCall_idi"]=function(){return(dynCall_idi=Module["dynCall_idi"]=Module["asm"]["dynCall_idi"]).apply(null,arguments)};var dynCall_iji=Module["dynCall_iji"]=function(){return(dynCall_iji=Module["dynCall_iji"]=Module["asm"]["dynCall_iji"]).apply(null,arguments)};var 
dynCall_viifii=Module["dynCall_viifii"]=function(){return(dynCall_viifii=Module["dynCall_viifii"]=Module["asm"]["dynCall_viifii"]).apply(null,arguments)};var dynCall_fiiffi=Module["dynCall_fiiffi"]=function(){return(dynCall_fiiffi=Module["dynCall_fiiffi"]=Module["asm"]["dynCall_fiiffi"]).apply(null,arguments)};var dynCall_viiififii=Module["dynCall_viiififii"]=function(){return(dynCall_viiififii=Module["dynCall_viiififii"]=Module["asm"]["dynCall_viiififii"]).apply(null,arguments)};var dynCall_fi=Module["dynCall_fi"]=function(){return(dynCall_fi=Module["dynCall_fi"]=Module["asm"]["dynCall_fi"]).apply(null,arguments)};var dynCall_iiifi=Module["dynCall_iiifi"]=function(){return(dynCall_iiifi=Module["dynCall_iiifi"]=Module["asm"]["dynCall_iiifi"]).apply(null,arguments)};var dynCall_viiiifi=Module["dynCall_viiiifi"]=function(){return(dynCall_viiiifi=Module["dynCall_viiiifi"]=Module["asm"]["dynCall_viiiifi"]).apply(null,arguments)};var dynCall_fii=Module["dynCall_fii"]=function(){return(dynCall_fii=Module["dynCall_fii"]=Module["asm"]["dynCall_fii"]).apply(null,arguments)};var dynCall_iiiiiiiiiiii=Module["dynCall_iiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiii=Module["dynCall_iiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiifii=Module["dynCall_iiiifii"]=function(){return(dynCall_iiiifii=Module["dynCall_iiiifii"]=Module["asm"]["dynCall_iiiifii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiifii=Module["dynCall_viiiifii"]=function(){return(dynCall_viiiifii=Module["dynCall_viiiifii"]=Module["asm"]["dynCall_viiiifii"]).apply(null,arguments)};var 
dynCall_viiiiiiiiiiii=Module["dynCall_viiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiii=Module["dynCall_viiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_jijii=Module["dynCall_jijii"]=function(){return(dynCall_jijii=Module["dynCall_jijii"]=Module["asm"]["dynCall_jijii"]).apply(null,arguments)};var dynCall_iiijii=Module["dynCall_iiijii"]=function(){return(dynCall_iiijii=Module["dynCall_iiijii"]=Module["asm"]["dynCall_iiijii"]).apply(null,arguments)};var dynCall_iijiiii=Module["dynCall_iijiiii"]=function(){return(dynCall_iijiiii=Module["dynCall_iijiiii"]=Module["asm"]["dynCall_iijiiii"]).apply(null,arguments)};var dynCall_jijiii=Module["dynCall_jijiii"]=function(){return(dynCall_jijiii=Module["dynCall_jijiii"]=Module["asm"]["dynCall_jijiii"]).apply(null,arguments)};var 
dynCall_viijii=Module["dynCall_viijii"]=function(){return(dynCall_viijii=Module["dynCall_viijii"]=Module["asm"]["dynCall_viijii"]).apply(null,arguments)};var dynCall_iijiiiiii=Module["dynCall_iijiiiiii"]=function(){return(dynCall_iijiiiiii=Module["dynCall_iijiiiiii"]=Module["asm"]["dynCall_iijiiiiii"]).apply(null,arguments)};var dynCall_iijjiiiiii=Module["dynCall_iijjiiiiii"]=function(){return(dynCall_iijjiiiiii=Module["dynCall_iijjiiiiii"]=Module["asm"]["dynCall_iijjiiiiii"]).apply(null,arguments)};var dynCall_iiiijjii=Module["dynCall_iiiijjii"]=function(){return(dynCall_iiiijjii=Module["dynCall_iiiijjii"]=Module["asm"]["dynCall_iiiijjii"]).apply(null,arguments)};var dynCall_iijii=Module["dynCall_iijii"]=function(){return(dynCall_iijii=Module["dynCall_iijii"]=Module["asm"]["dynCall_iijii"]).apply(null,arguments)};var dynCall_j=Module["dynCall_j"]=function(){return(dynCall_j=Module["dynCall_j"]=Module["asm"]["dynCall_j"]).apply(null,arguments)};var dynCall_iiiiiiiiiji=Module["dynCall_iiiiiiiiiji"]=function(){return(dynCall_iiiiiiiiiji=Module["dynCall_iiiiiiiiiji"]=Module["asm"]["dynCall_iiiiiiiiiji"]).apply(null,arguments)};var dynCall_vji=Module["dynCall_vji"]=function(){return(dynCall_vji=Module["dynCall_vji"]=Module["asm"]["dynCall_vji"]).apply(null,arguments)};var dynCall_ifi=Module["dynCall_ifi"]=function(){return(dynCall_ifi=Module["dynCall_ifi"]=Module["asm"]["dynCall_ifi"]).apply(null,arguments)};var dynCall_vifii=Module["dynCall_vifii"]=function(){return(dynCall_vifii=Module["dynCall_vifii"]=Module["asm"]["dynCall_vifii"]).apply(null,arguments)};var dynCall_iiiidii=Module["dynCall_iiiidii"]=function(){return(dynCall_iiiidii=Module["dynCall_iiiidii"]=Module["asm"]["dynCall_iiiidii"]).apply(null,arguments)};var dynCall_iijji=Module["dynCall_iijji"]=function(){return(dynCall_iijji=Module["dynCall_iijji"]=Module["asm"]["dynCall_iijji"]).apply(null,arguments)};var 
dynCall_iiddi=Module["dynCall_iiddi"]=function(){return(dynCall_iiddi=Module["dynCall_iiddi"]=Module["asm"]["dynCall_iiddi"]).apply(null,arguments)};var dynCall_iiiiji=Module["dynCall_iiiiji"]=function(){return(dynCall_iiiiji=Module["dynCall_iiiiji"]=Module["asm"]["dynCall_iiiiji"]).apply(null,arguments)};var dynCall_jidi=Module["dynCall_jidi"]=function(){return(dynCall_jidi=Module["dynCall_jidi"]=Module["asm"]["dynCall_jidi"]).apply(null,arguments)};var dynCall_ddiii=Module["dynCall_ddiii"]=function(){return(dynCall_ddiii=Module["dynCall_ddiii"]=Module["asm"]["dynCall_ddiii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_didi=Module["dynCall_didi"]=function(){return(dynCall_didi=Module["dynCall_didi"]=Module["asm"]["dynCall_didi"]).apply(null,arguments)};var dynCall_fifi=Module["dynCall_fifi"]=function(){return(dynCall_fifi=Module["dynCall_fifi"]=Module["asm"]["dynCall_fifi"]).apply(null,arguments)};var dynCall_vijiii=Module["dynCall_vijiii"]=function(){return(dynCall_vijiii=Module["dynCall_vijiii"]=Module["asm"]["dynCall_vijiii"]).apply(null,arguments)};var dynCall_vjjjiiii=Module["dynCall_vjjjiiii"]=function(){return(dynCall_vjjjiiii=Module["dynCall_vjjjiiii"]=Module["asm"]["dynCall_vjjjiiii"]).apply(null,arguments)};var dynCall_vjiiiii=Module["dynCall_vjiiiii"]=function(){return(dynCall_vjiiiii=Module["dynCall_vjiiiii"]=Module["asm"]["dynCall_vjiiiii"]).apply(null,arguments)};var dynCall_jiiiii=Module["dynCall_jiiiii"]=function(){return(dynCall_jiiiii=Module["dynCall_jiiiii"]=Module["asm"]["dynCall_jiiiii"]).apply(null,arguments)};var dynCall_viffi=Module["dynCall_viffi"]=function(){return(dynCall_viffi=Module["dynCall_viffi"]=Module["asm"]["dynCall_viffi"]).apply(null,arguments)};var 
dynCall_viiifi=Module["dynCall_viiifi"]=function(){return(dynCall_viiifi=Module["dynCall_viiifi"]=Module["asm"]["dynCall_viiifi"]).apply(null,arguments)};var dynCall_iiiiiiiiifi=Module["dynCall_iiiiiiiiifi"]=function(){return(dynCall_iiiiiiiiifi=Module["dynCall_iiiiiiiiifi"]=Module["asm"]["dynCall_iiiiiiiiifi"]).apply(null,arguments)};var dynCall_iiiiiiiifi=Module["dynCall_iiiiiiiifi"]=function(){return(dynCall_iiiiiiiifi=Module["dynCall_iiiiiiiifi"]=Module["asm"]["dynCall_iiiiiiiifi"]).apply(null,arguments)};var dynCall_ifiiii=Module["dynCall_ifiiii"]=function(){return(dynCall_ifiiii=Module["dynCall_ifiiii"]=Module["asm"]["dynCall_ifiiii"]).apply(null,arguments)};var dynCall_idiiiii=Module["dynCall_idiiiii"]=function(){return(dynCall_idiiiii=Module["dynCall_idiiiii"]=Module["asm"]["dynCall_idiiiii"]).apply(null,arguments)};var dynCall_idiiii=Module["dynCall_idiiii"]=function(){return(dynCall_idiiii=Module["dynCall_idiiii"]=Module["asm"]["dynCall_idiiii"]).apply(null,arguments)};var dynCall_idii=Module["dynCall_idii"]=function(){return(dynCall_idii=Module["dynCall_idii"]=Module["asm"]["dynCall_idii"]).apply(null,arguments)};var dynCall_vijiiii=Module["dynCall_vijiiii"]=function(){return(dynCall_vijiiii=Module["dynCall_vijiiii"]=Module["asm"]["dynCall_vijiiii"]).apply(null,arguments)};var dynCall_iiijiiii=Module["dynCall_iiijiiii"]=function(){return(dynCall_iiijiiii=Module["dynCall_iiijiiii"]=Module["asm"]["dynCall_iiijiiii"]).apply(null,arguments)};var dynCall_iiiji=Module["dynCall_iiiji"]=function(){return(dynCall_iiiji=Module["dynCall_iiiji"]=Module["asm"]["dynCall_iiiji"]).apply(null,arguments)};var dynCall_vjiiii=Module["dynCall_vjiiii"]=function(){return(dynCall_vjiiii=Module["dynCall_vjiiii"]=Module["asm"]["dynCall_vjiiii"]).apply(null,arguments)};var dynCall_iddi=Module["dynCall_iddi"]=function(){return(dynCall_iddi=Module["dynCall_iddi"]=Module["asm"]["dynCall_iddi"]).apply(null,arguments)};var 
dynCall_iiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiiiiiiiii=Module["dynCall_iiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiii=Module["dynCall_iiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiijii=Module["dynCall_viiijii"]=function(){return(dynCall_viiijii=Module["dynCall_viiijii"]=Module["asm"]["dynCall_viiijii"]).apply(null,arguments)};var dynCall_viijiii=Module["dynCall_viijiii"]=function(){return(dynCall_viijiii=Module["dynCall_viijiii"]=Module["asm"]["dynCall_viijiii"]).apply(null,arguments)};var dynCall_ijii=Module["dynCall_ijii"]=function(){return(dynCall_ijii=Module["dynCall_ijii"]=Module["asm"]["dynCall_ijii"]).apply(null,arguments)};var dynCall_iiiiiji=Module["dynCall_iiiiiji"]=function(){return(dynCall_iiiiiji=Module["dynCall_iiiiiji"]=Module["asm"]["dynCall_iiiiiji"]).apply(null,arguments)};var dynCall_ijjiiii=Module["dynCall_ijjiiii"]=function(){return(dynCall_ijjiiii=Module["dynCall_ijjiiii"]=Module["asm"]["dynCall_ijjiiii"]).apply(null,arguments)};var dynCall_vdiiiii=Module["dynCall_vdiiiii"]=function(){return(dynCall_vdiiiii=Module["dynCall_vdiiiii"]=Module["asm"]["dynCall_vdiiiii"]).apply(null,arguments)};var dynCall_diiji=Module["dynCall_diiji"]=function(){return(dynCall_diiji=Module["dynCall_diiji"]=Module["asm"]["dynCall_diiji"]).apply(null,arguments)};var dynCall_vjiiiiiiii=Module["dynCall_vjiiiiiiii"]=function(){return(dynCall_vjiiiiiiii=Module["dynCall_vjiiiiiiii"]=Module["asm"]["dynCall_vjiiiiiiii"]).apply(null,arguments)};var 
dynCall_vjiiiiiii=Module["dynCall_vjiiiiiii"]=function(){return(dynCall_vjiiiiiii=Module["dynCall_vjiiiiiii"]=Module["asm"]["dynCall_vjiiiiiii"]).apply(null,arguments)};var dynCall_ijiiii=Module["dynCall_ijiiii"]=function(){return(dynCall_ijiiii=Module["dynCall_ijiiii"]=Module["asm"]["dynCall_ijiiii"]).apply(null,arguments)};var dynCall_iidii=Module["dynCall_iidii"]=function(){return(dynCall_iidii=Module["dynCall_iidii"]=Module["asm"]["dynCall_iidii"]).apply(null,arguments)};var dynCall_iifii=Module["dynCall_iifii"]=function(){return(dynCall_iifii=Module["dynCall_iifii"]=Module["asm"]["dynCall_iifii"]).apply(null,arguments)};var dynCall_iidiii=Module["dynCall_iidiii"]=function(){return(dynCall_iidiii=Module["dynCall_iidiii"]=Module["asm"]["dynCall_iidiii"]).apply(null,arguments)};var dynCall_diji=Module["dynCall_diji"]=function(){return(dynCall_diji=Module["dynCall_diji"]=Module["asm"]["dynCall_diji"]).apply(null,arguments)};var dynCall_fidi=Module["dynCall_fidi"]=function(){return(dynCall_fidi=Module["dynCall_fidi"]=Module["asm"]["dynCall_fidi"]).apply(null,arguments)};var dynCall_ijjiii=Module["dynCall_ijjiii"]=function(){return(dynCall_ijjiii=Module["dynCall_ijjiii"]=Module["asm"]["dynCall_ijjiii"]).apply(null,arguments)};var dynCall_viffffi=Module["dynCall_viffffi"]=function(){return(dynCall_viffffi=Module["dynCall_viffffi"]=Module["asm"]["dynCall_viffffi"]).apply(null,arguments)};var dynCall_diiiii=Module["dynCall_diiiii"]=function(){return(dynCall_diiiii=Module["dynCall_diiiii"]=Module["asm"]["dynCall_diiiii"]).apply(null,arguments)};var dynCall_vijji=Module["dynCall_vijji"]=function(){return(dynCall_vijji=Module["dynCall_vijji"]=Module["asm"]["dynCall_vijji"]).apply(null,arguments)};var dynCall_vfffi=Module["dynCall_vfffi"]=function(){return(dynCall_vfffi=Module["dynCall_vfffi"]=Module["asm"]["dynCall_vfffi"]).apply(null,arguments)};var 
dynCall_vffi=Module["dynCall_vffi"]=function(){return(dynCall_vffi=Module["dynCall_vffi"]=Module["asm"]["dynCall_vffi"]).apply(null,arguments)};var dynCall_vffffi=Module["dynCall_vffffi"]=function(){return(dynCall_vffffi=Module["dynCall_vffffi"]=Module["asm"]["dynCall_vffffi"]).apply(null,arguments)};var dynCall_viiiiffi=Module["dynCall_viiiiffi"]=function(){return(dynCall_viiiiffi=Module["dynCall_viiiiffi"]=Module["asm"]["dynCall_viiiiffi"]).apply(null,arguments)};var dynCall_viiiffii=Module["dynCall_viiiffii"]=function(){return(dynCall_viiiffii=Module["dynCall_viiiffii"]=Module["asm"]["dynCall_viiiffii"]).apply(null,arguments)};var dynCall_vifffi=Module["dynCall_vifffi"]=function(){return(dynCall_vifffi=Module["dynCall_vifffi"]=Module["asm"]["dynCall_vifffi"]).apply(null,arguments)};var dynCall_viffffffi=Module["dynCall_viffffffi"]=function(){return(dynCall_viffffffi=Module["dynCall_viffffffi"]=Module["asm"]["dynCall_viffffffi"]).apply(null,arguments)};var dynCall_fiiii=Module["dynCall_fiiii"]=function(){return(dynCall_fiiii=Module["dynCall_fiiii"]=Module["asm"]["dynCall_fiiii"]).apply(null,arguments)};var dynCall_vffffffii=Module["dynCall_vffffffii"]=function(){return(dynCall_vffffffii=Module["dynCall_vffffffii"]=Module["asm"]["dynCall_vffffffii"]).apply(null,arguments)};var dynCall_vfiii=Module["dynCall_vfiii"]=function(){return(dynCall_vfiii=Module["dynCall_vfiii"]=Module["asm"]["dynCall_vfiii"]).apply(null,arguments)};var dynCall_ffi=Module["dynCall_ffi"]=function(){return(dynCall_ffi=Module["dynCall_ffi"]=Module["asm"]["dynCall_ffi"]).apply(null,arguments)};var dynCall_ffffi=Module["dynCall_ffffi"]=function(){return(dynCall_ffffi=Module["dynCall_ffffi"]=Module["asm"]["dynCall_ffffi"]).apply(null,arguments)};var dynCall_iffi=Module["dynCall_iffi"]=function(){return(dynCall_iffi=Module["dynCall_iffi"]=Module["asm"]["dynCall_iffi"]).apply(null,arguments)};var 
dynCall_fffifffi=Module["dynCall_fffifffi"]=function(){return(dynCall_fffifffi=Module["dynCall_fffifffi"]=Module["asm"]["dynCall_fffifffi"]).apply(null,arguments)};var dynCall_fdi=Module["dynCall_fdi"]=function(){return(dynCall_fdi=Module["dynCall_fdi"]=Module["asm"]["dynCall_fdi"]).apply(null,arguments)};var dynCall_ddi=Module["dynCall_ddi"]=function(){return(dynCall_ddi=Module["dynCall_ddi"]=Module["asm"]["dynCall_ddi"]).apply(null,arguments)};var dynCall_vfii=Module["dynCall_vfii"]=function(){return(dynCall_vfii=Module["dynCall_vfii"]=Module["asm"]["dynCall_vfii"]).apply(null,arguments)};var dynCall_ddddi=Module["dynCall_ddddi"]=function(){return(dynCall_ddddi=Module["dynCall_ddddi"]=Module["asm"]["dynCall_ddddi"]).apply(null,arguments)};var dynCall_jjjji=Module["dynCall_jjjji"]=function(){return(dynCall_jjjji=Module["dynCall_jjjji"]=Module["asm"]["dynCall_jjjji"]).apply(null,arguments)};var dynCall_vijjii=Module["dynCall_vijjii"]=function(){return(dynCall_vijjii=Module["dynCall_vijjii"]=Module["asm"]["dynCall_vijjii"]).apply(null,arguments)};var dynCall_viiifii=Module["dynCall_viiifii"]=function(){return(dynCall_viiifii=Module["dynCall_viiifii"]=Module["asm"]["dynCall_viiifii"]).apply(null,arguments)};var dynCall_viiiiiiiijijiii=Module["dynCall_viiiiiiiijijiii"]=function(){return(dynCall_viiiiiiiijijiii=Module["dynCall_viiiiiiiijijiii"]=Module["asm"]["dynCall_viiiiiiiijijiii"]).apply(null,arguments)};var dynCall_viiiififfi=Module["dynCall_viiiififfi"]=function(){return(dynCall_viiiififfi=Module["dynCall_viiiififfi"]=Module["asm"]["dynCall_viiiififfi"]).apply(null,arguments)};var dynCall_viiiifiifi=Module["dynCall_viiiifiifi"]=function(){return(dynCall_viiiifiifi=Module["dynCall_viiiifiifi"]=Module["asm"]["dynCall_viiiifiifi"]).apply(null,arguments)};var dynCall_viiiifiiii=Module["dynCall_viiiifiiii"]=function(){return(dynCall_viiiifiiii=Module["dynCall_viiiifiiii"]=Module["asm"]["dynCall_viiiifiiii"]).apply(null,arguments)};var 
dynCall_viiiifiiiii=Module["dynCall_viiiifiiiii"]=function(){return(dynCall_viiiifiiiii=Module["dynCall_viiiifiiiii"]=Module["asm"]["dynCall_viiiifiiiii"]).apply(null,arguments)};var dynCall_viiiifiiiiiiii=Module["dynCall_viiiifiiiiiiii"]=function(){return(dynCall_viiiifiiiiiiii=Module["dynCall_viiiifiiiiiiii"]=Module["asm"]["dynCall_viiiifiiiiiiii"]).apply(null,arguments)};var dynCall_iiifiii=Module["dynCall_iiifiii"]=function(){return(dynCall_iiifiii=Module["dynCall_iiifiii"]=Module["asm"]["dynCall_iiifiii"]).apply(null,arguments)};var dynCall_viiiiiffii=Module["dynCall_viiiiiffii"]=function(){return(dynCall_viiiiiffii=Module["dynCall_viiiiiffii"]=Module["asm"]["dynCall_viiiiiffii"]).apply(null,arguments)};var dynCall_viffffii=Module["dynCall_viffffii"]=function(){return(dynCall_viffffii=Module["dynCall_viffffii"]=Module["asm"]["dynCall_viffffii"]).apply(null,arguments)};var dynCall_iiiifiii=Module["dynCall_iiiifiii"]=function(){return(dynCall_iiiifiii=Module["dynCall_iiiifiii"]=Module["asm"]["dynCall_iiiifiii"]).apply(null,arguments)};var dynCall_iifiii=Module["dynCall_iifiii"]=function(){return(dynCall_iifiii=Module["dynCall_iifiii"]=Module["asm"]["dynCall_iifiii"]).apply(null,arguments)};var dynCall_iifiiii=Module["dynCall_iifiiii"]=function(){return(dynCall_iifiiii=Module["dynCall_iifiiii"]=Module["asm"]["dynCall_iifiiii"]).apply(null,arguments)};var dynCall_iiiiifiii=Module["dynCall_iiiiifiii"]=function(){return(dynCall_iiiiifiii=Module["dynCall_iiiiifiii"]=Module["asm"]["dynCall_iiiiifiii"]).apply(null,arguments)};var dynCall_iiifiiii=Module["dynCall_iiifiiii"]=function(){return(dynCall_iiifiiii=Module["dynCall_iiifiiii"]=Module["asm"]["dynCall_iiifiiii"]).apply(null,arguments)};var dynCall_vifffffi=Module["dynCall_vifffffi"]=function(){return(dynCall_vifffffi=Module["dynCall_vifffffi"]=Module["asm"]["dynCall_vifffffi"]).apply(null,arguments)};var 
dynCall_viiiiifi=Module["dynCall_viiiiifi"]=function(){return(dynCall_viiiiifi=Module["dynCall_viiiiifi"]=Module["asm"]["dynCall_viiiiifi"]).apply(null,arguments)};var dynCall_viffiiii=Module["dynCall_viffiiii"]=function(){return(dynCall_viffiiii=Module["dynCall_viffiiii"]=Module["asm"]["dynCall_viffiiii"]).apply(null,arguments)};var dynCall_viiiffffiiii=Module["dynCall_viiiffffiiii"]=function(){return(dynCall_viiiffffiiii=Module["dynCall_viiiffffiiii"]=Module["asm"]["dynCall_viiiffffiiii"]).apply(null,arguments)};var dynCall_viifffffffiiiii=Module["dynCall_viifffffffiiiii"]=function(){return(dynCall_viifffffffiiiii=Module["dynCall_viifffffffiiiii"]=Module["asm"]["dynCall_viifffffffiiiii"]).apply(null,arguments)};var dynCall_fiiiii=Module["dynCall_fiiiii"]=function(){return(dynCall_fiiiii=Module["dynCall_fiiiii"]=Module["asm"]["dynCall_fiiiii"]).apply(null,arguments)};var dynCall_iiiiiiffiiiiiiiiiffffiiii=Module["dynCall_iiiiiiffiiiiiiiiiffffiiii"]=function(){return(dynCall_iiiiiiffiiiiiiiiiffffiiii=Module["dynCall_iiiiiiffiiiiiiiiiffffiiii"]=Module["asm"]["dynCall_iiiiiiffiiiiiiiiiffffiiii"]).apply(null,arguments)};var dynCall_iiiiiiffiiiiiiiiiiiiiii=Module["dynCall_iiiiiiffiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiffiiiiiiiiiiiiiii=Module["dynCall_iiiiiiffiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiffiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viffii=Module["dynCall_viffii"]=function(){return(dynCall_viffii=Module["dynCall_viffii"]=Module["asm"]["dynCall_viffii"]).apply(null,arguments)};var dynCall_vififiii=Module["dynCall_vififiii"]=function(){return(dynCall_vififiii=Module["dynCall_vififiii"]=Module["asm"]["dynCall_vififiii"]).apply(null,arguments)};var dynCall_viififii=Module["dynCall_viififii"]=function(){return(dynCall_viififii=Module["dynCall_viififii"]=Module["asm"]["dynCall_viififii"]).apply(null,arguments)};var 
dynCall_fiffi=Module["dynCall_fiffi"]=function(){return(dynCall_fiffi=Module["dynCall_fiffi"]=Module["asm"]["dynCall_fiffi"]).apply(null,arguments)};var dynCall_viijji=Module["dynCall_viijji"]=function(){return(dynCall_viijji=Module["dynCall_viijji"]=Module["asm"]["dynCall_viijji"]).apply(null,arguments)};var dynCall_viiidi=Module["dynCall_viiidi"]=function(){return(dynCall_viiidi=Module["dynCall_viiidi"]=Module["asm"]["dynCall_viiidi"]).apply(null,arguments)};var dynCall_jijji=Module["dynCall_jijji"]=function(){return(dynCall_jijji=Module["dynCall_jijji"]=Module["asm"]["dynCall_jijji"]).apply(null,arguments)};var dynCall_viiffffi=Module["dynCall_viiffffi"]=function(){return(dynCall_viiffffi=Module["dynCall_viiffffi"]=Module["asm"]["dynCall_viiffffi"]).apply(null,arguments)};var dynCall_fifffi=Module["dynCall_fifffi"]=function(){return(dynCall_fifffi=Module["dynCall_fifffi"]=Module["asm"]["dynCall_fifffi"]).apply(null,arguments)};var dynCall_ifffi=Module["dynCall_ifffi"]=function(){return(dynCall_ifffi=Module["dynCall_ifffi"]=Module["asm"]["dynCall_ifffi"]).apply(null,arguments)};var dynCall_viffiii=Module["dynCall_viffiii"]=function(){return(dynCall_viffiii=Module["dynCall_viffiii"]=Module["asm"]["dynCall_viffiii"]).apply(null,arguments)};var dynCall_viffifi=Module["dynCall_viffifi"]=function(){return(dynCall_viffifi=Module["dynCall_viffifi"]=Module["asm"]["dynCall_viffifi"]).apply(null,arguments)};var dynCall_fiffffi=Module["dynCall_fiffffi"]=function(){return(dynCall_fiffffi=Module["dynCall_fiffffi"]=Module["asm"]["dynCall_fiffffi"]).apply(null,arguments)};var dynCall_fffffffi=Module["dynCall_fffffffi"]=function(){return(dynCall_fffffffi=Module["dynCall_fffffffi"]=Module["asm"]["dynCall_fffffffi"]).apply(null,arguments)};var dynCall_viiffifi=Module["dynCall_viiffifi"]=function(){return(dynCall_viiffifi=Module["dynCall_viiffifi"]=Module["asm"]["dynCall_viiffifi"]).apply(null,arguments)};var 
dynCall_viiiffiiiiiiiii=Module["dynCall_viiiffiiiiiiiii"]=function(){return(dynCall_viiiffiiiiiiiii=Module["dynCall_viiiffiiiiiiiii"]=Module["asm"]["dynCall_viiiffiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiffiiiiii=Module["dynCall_viiiffiiiiii"]=function(){return(dynCall_viiiffiiiiii=Module["dynCall_viiiffiiiiii"]=Module["asm"]["dynCall_viiiffiiiiii"]).apply(null,arguments)};var dynCall_viiffiiiiiiiiii=Module["dynCall_viiffiiiiiiiiii"]=function(){return(dynCall_viiffiiiiiiiiii=Module["dynCall_viiffiiiiiiiiii"]=Module["asm"]["dynCall_viiffiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiffiiiiiii=Module["dynCall_viiffiiiiiii"]=function(){return(dynCall_viiffiiiiiii=Module["dynCall_viiffiiiiiii"]=Module["asm"]["dynCall_viiffiiiiiii"]).apply(null,arguments)};var dynCall_iiiffiiii=Module["dynCall_iiiffiiii"]=function(){return(dynCall_iiiffiiii=Module["dynCall_iiiffiiii"]=Module["asm"]["dynCall_iiiffiiii"]).apply(null,arguments)};var dynCall_fffffi=Module["dynCall_fffffi"]=function(){return(dynCall_fffffi=Module["dynCall_fffffi"]=Module["asm"]["dynCall_fffffi"]).apply(null,arguments)};var dynCall_iiiiffiiii=Module["dynCall_iiiiffiiii"]=function(){return(dynCall_iiiiffiiii=Module["dynCall_iiiiffiiii"]=Module["asm"]["dynCall_iiiiffiiii"]).apply(null,arguments)};var dynCall_fiiiffi=Module["dynCall_fiiiffi"]=function(){return(dynCall_fiiiffi=Module["dynCall_fiiiffi"]=Module["asm"]["dynCall_fiiiffi"]).apply(null,arguments)};var dynCall_vjii=Module["dynCall_vjii"]=function(){return(dynCall_vjii=Module["dynCall_vjii"]=Module["asm"]["dynCall_vjii"]).apply(null,arguments)};var dynCall_viiiiiiiijiiii=Module["dynCall_viiiiiiiijiiii"]=function(){return(dynCall_viiiiiiiijiiii=Module["dynCall_viiiiiiiijiiii"]=Module["asm"]["dynCall_viiiiiiiijiiii"]).apply(null,arguments)};var dynCall_viiiiiifiiiiii=Module["dynCall_viiiiiifiiiiii"]=function(){return(dynCall_viiiiiifiiiiii=Module["dynCall_viiiiiifiiiiii"]=Module["asm"]["dynCall_viiiiiifiiiiii"]).apply(null,arguments)};var 
dynCall_viffffiii=Module["dynCall_viffffiii"]=function(){return(dynCall_viffffiii=Module["dynCall_viffffiii"]=Module["asm"]["dynCall_viffffiii"]).apply(null,arguments)};var dynCall_viifiii=Module["dynCall_viifiii"]=function(){return(dynCall_viifiii=Module["dynCall_viifiii"]=Module["asm"]["dynCall_viifiii"]).apply(null,arguments)};var dynCall_vifiiiiii=Module["dynCall_vifiiiiii"]=function(){return(dynCall_vifiiiiii=Module["dynCall_vifiiiiii"]=Module["asm"]["dynCall_vifiiiiii"]).apply(null,arguments)};var dynCall_ffii=Module["dynCall_ffii"]=function(){return(dynCall_ffii=Module["dynCall_ffii"]=Module["asm"]["dynCall_ffii"]).apply(null,arguments)};var dynCall_viifiiii=Module["dynCall_viifiiii"]=function(){return(dynCall_viifiiii=Module["dynCall_viifiiii"]=Module["asm"]["dynCall_viifiiii"]).apply(null,arguments)};var dynCall_fifii=Module["dynCall_fifii"]=function(){return(dynCall_fifii=Module["dynCall_fifii"]=Module["asm"]["dynCall_fifii"]).apply(null,arguments)};var dynCall_vifffii=Module["dynCall_vifffii"]=function(){return(dynCall_vifffii=Module["dynCall_vifffii"]=Module["asm"]["dynCall_vifffii"]).apply(null,arguments)};var dynCall_viiiffi=Module["dynCall_viiiffi"]=function(){return(dynCall_viiiffi=Module["dynCall_viiiffi"]=Module["asm"]["dynCall_viiiffi"]).apply(null,arguments)};var dynCall_viiifffi=Module["dynCall_viiifffi"]=function(){return(dynCall_viiifffi=Module["dynCall_viiifffi"]=Module["asm"]["dynCall_viiifffi"]).apply(null,arguments)};var dynCall_fiifii=Module["dynCall_fiifii"]=function(){return(dynCall_fiifii=Module["dynCall_fiifii"]=Module["asm"]["dynCall_fiifii"]).apply(null,arguments)};var dynCall_iiiifiiii=Module["dynCall_iiiifiiii"]=function(){return(dynCall_iiiifiiii=Module["dynCall_iiiifiiii"]=Module["asm"]["dynCall_iiiifiiii"]).apply(null,arguments)};var dynCall_viiiiiffi=Module["dynCall_viiiiiffi"]=function(){return(dynCall_viiiiiffi=Module["dynCall_viiiiiffi"]=Module["asm"]["dynCall_viiiiiffi"]).apply(null,arguments)};var 
dynCall_iifffi=Module["dynCall_iifffi"]=function(){return(dynCall_iifffi=Module["dynCall_iifffi"]=Module["asm"]["dynCall_iifffi"]).apply(null,arguments)};var dynCall_viijjii=Module["dynCall_viijjii"]=function(){return(dynCall_viijjii=Module["dynCall_viijjii"]=Module["asm"]["dynCall_viijjii"]).apply(null,arguments)};var dynCall_viiiiifii=Module["dynCall_viiiiifii"]=function(){return(dynCall_viiiiifii=Module["dynCall_viiiiifii"]=Module["asm"]["dynCall_viiiiifii"]).apply(null,arguments)};var dynCall_viiiffffi=Module["dynCall_viiiffffi"]=function(){return(dynCall_viiiffffi=Module["dynCall_viiiffffi"]=Module["asm"]["dynCall_viiiffffi"]).apply(null,arguments)};var dynCall_vidii=Module["dynCall_vidii"]=function(){return(dynCall_vidii=Module["dynCall_vidii"]=Module["asm"]["dynCall_vidii"]).apply(null,arguments)};var dynCall_vijiiiiiiii=Module["dynCall_vijiiiiiiii"]=function(){return(dynCall_vijiiiiiiii=Module["dynCall_vijiiiiiiii"]=Module["asm"]["dynCall_vijiiiiiiii"]).apply(null,arguments)};var dynCall_vijiiiijjjjji=Module["dynCall_vijiiiijjjjji"]=function(){return(dynCall_vijiiiijjjjji=Module["dynCall_vijiiiijjjjji"]=Module["asm"]["dynCall_vijiiiijjjjji"]).apply(null,arguments)};var dynCall_jiidi=Module["dynCall_jiidi"]=function(){return(dynCall_jiidi=Module["dynCall_jiidi"]=Module["asm"]["dynCall_jiidi"]).apply(null,arguments)};var dynCall_viidii=Module["dynCall_viidii"]=function(){return(dynCall_viidii=Module["dynCall_viidii"]=Module["asm"]["dynCall_viidii"]).apply(null,arguments)};var dynCall_viiidiii=Module["dynCall_viiidiii"]=function(){return(dynCall_viiidiii=Module["dynCall_viiidiii"]=Module["asm"]["dynCall_viiidiii"]).apply(null,arguments)};var dynCall_vijiiiiiii=Module["dynCall_vijiiiiiii"]=function(){return(dynCall_vijiiiiiii=Module["dynCall_vijiiiiiii"]=Module["asm"]["dynCall_vijiiiiiii"]).apply(null,arguments)};var 
dynCall_jjiiii=Module["dynCall_jjiiii"]=function(){return(dynCall_jjiiii=Module["dynCall_jjiiii"]=Module["asm"]["dynCall_jjiiii"]).apply(null,arguments)};var dynCall_jjiiiii=Module["dynCall_jjiiiii"]=function(){return(dynCall_jjiiiii=Module["dynCall_jjiiiii"]=Module["asm"]["dynCall_jjiiiii"]).apply(null,arguments)};var dynCall_viijiiiiii=Module["dynCall_viijiiiiii"]=function(){return(dynCall_viijiiiiii=Module["dynCall_viijiiiiii"]=Module["asm"]["dynCall_viijiiiiii"]).apply(null,arguments)};var dynCall_jijjji=Module["dynCall_jijjji"]=function(){return(dynCall_jijjji=Module["dynCall_jijjji"]=Module["asm"]["dynCall_jijjji"]).apply(null,arguments)};var dynCall_jijjjii=Module["dynCall_jijjjii"]=function(){return(dynCall_jijjjii=Module["dynCall_jijjjii"]=Module["asm"]["dynCall_jijjjii"]).apply(null,arguments)};var dynCall_jjiii=Module["dynCall_jjiii"]=function(){return(dynCall_jjiii=Module["dynCall_jjiii"]=Module["asm"]["dynCall_jjiii"]).apply(null,arguments)};var dynCall_ijijiiiii=Module["dynCall_ijijiiiii"]=function(){return(dynCall_ijijiiiii=Module["dynCall_ijijiiiii"]=Module["asm"]["dynCall_ijijiiiii"]).apply(null,arguments)};var dynCall_ijjjiii=Module["dynCall_ijjjiii"]=function(){return(dynCall_ijjjiii=Module["dynCall_ijjjiii"]=Module["asm"]["dynCall_ijjjiii"]).apply(null,arguments)};var dynCall_ijiii=Module["dynCall_ijiii"]=function(){return(dynCall_ijiii=Module["dynCall_ijiii"]=Module["asm"]["dynCall_ijiii"]).apply(null,arguments)};var dynCall_vijjjiijii=Module["dynCall_vijjjiijii"]=function(){return(dynCall_vijjjiijii=Module["dynCall_vijjjiijii"]=Module["asm"]["dynCall_vijjjiijii"]).apply(null,arguments)};var dynCall_ijjjiijii=Module["dynCall_ijjjiijii"]=function(){return(dynCall_ijjjiijii=Module["dynCall_ijjjiijii"]=Module["asm"]["dynCall_ijjjiijii"]).apply(null,arguments)};var dynCall_vijiiiiii=Module["dynCall_vijiiiiii"]=function(){return(dynCall_vijiiiiii=Module["dynCall_vijiiiiii"]=Module["asm"]["dynCall_vijiiiiii"]).apply(null,arguments)};var 
dynCall_jfi=Module["dynCall_jfi"]=function(){return(dynCall_jfi=Module["dynCall_jfi"]=Module["asm"]["dynCall_jfi"]).apply(null,arguments)};var dynCall_fji=Module["dynCall_fji"]=function(){return(dynCall_fji=Module["dynCall_fji"]=Module["asm"]["dynCall_fji"]).apply(null,arguments)};var dynCall_dfi=Module["dynCall_dfi"]=function(){return(dynCall_dfi=Module["dynCall_dfi"]=Module["asm"]["dynCall_dfi"]).apply(null,arguments)};var dynCall_jidii=Module["dynCall_jidii"]=function(){return(dynCall_jidii=Module["dynCall_jidii"]=Module["asm"]["dynCall_jidii"]).apply(null,arguments)};var dynCall_viiiiiiiji=Module["dynCall_viiiiiiiji"]=function(){return(dynCall_viiiiiiiji=Module["dynCall_viiiiiiiji"]=Module["asm"]["dynCall_viiiiiiiji"]).apply(null,arguments)};var dynCall_viiiiiiiiji=Module["dynCall_viiiiiiiiji"]=function(){return(dynCall_viiiiiiiiji=Module["dynCall_viiiiiiiiji"]=Module["asm"]["dynCall_viiiiiiiiji"]).apply(null,arguments)};var dynCall_viiiiiiiiiji=Module["dynCall_viiiiiiiiiji"]=function(){return(dynCall_viiiiiiiiiji=Module["dynCall_viiiiiiiiiji"]=Module["asm"]["dynCall_viiiiiiiiiji"]).apply(null,arguments)};var dynCall_ijiijii=Module["dynCall_ijiijii"]=function(){return(dynCall_ijiijii=Module["dynCall_ijiijii"]=Module["asm"]["dynCall_ijiijii"]).apply(null,arguments)};var dynCall_vjjiiiii=Module["dynCall_vjjiiiii"]=function(){return(dynCall_vjjiiiii=Module["dynCall_vjjiiiii"]=Module["asm"]["dynCall_vjjiiiii"]).apply(null,arguments)};var dynCall_vjjii=Module["dynCall_vjjii"]=function(){return(dynCall_vjjii=Module["dynCall_vjjii"]=Module["asm"]["dynCall_vjjii"]).apply(null,arguments)};var dynCall_ijiiji=Module["dynCall_ijiiji"]=function(){return(dynCall_ijiiji=Module["dynCall_ijiiji"]=Module["asm"]["dynCall_ijiiji"]).apply(null,arguments)};var dynCall_ijiiiii=Module["dynCall_ijiiiii"]=function(){return(dynCall_ijiiiii=Module["dynCall_ijiiiii"]=Module["asm"]["dynCall_ijiiiii"]).apply(null,arguments)};var 
dynCall_ijiiiiji=Module["dynCall_ijiiiiji"]=function(){return(dynCall_ijiiiiji=Module["dynCall_ijiiiiji"]=Module["asm"]["dynCall_ijiiiiji"]).apply(null,arguments)};var dynCall_jiiiiii=Module["dynCall_jiiiiii"]=function(){return(dynCall_jiiiiii=Module["dynCall_jiiiiii"]=Module["asm"]["dynCall_jiiiiii"]).apply(null,arguments)};var dynCall_ddii=Module["dynCall_ddii"]=function(){return(dynCall_ddii=Module["dynCall_ddii"]=Module["asm"]["dynCall_ddii"]).apply(null,arguments)};var dynCall_idiii=Module["dynCall_idiii"]=function(){return(dynCall_idiii=Module["dynCall_idiii"]=Module["asm"]["dynCall_idiii"]).apply(null,arguments)};var dynCall_ifiii=Module["dynCall_ifiii"]=function(){return(dynCall_ifiii=Module["dynCall_ifiii"]=Module["asm"]["dynCall_ifiii"]).apply(null,arguments)};var dynCall_ifiiiii=Module["dynCall_ifiiiii"]=function(){return(dynCall_ifiiiii=Module["dynCall_ifiiiii"]=Module["asm"]["dynCall_ifiiiii"]).apply(null,arguments)};var dynCall_jjjii=Module["dynCall_jjjii"]=function(){return(dynCall_jjjii=Module["dynCall_jjjii"]=Module["asm"]["dynCall_jjjii"]).apply(null,arguments)};var dynCall_vdiii=Module["dynCall_vdiii"]=function(){return(dynCall_vdiii=Module["dynCall_vdiii"]=Module["asm"]["dynCall_vdiii"]).apply(null,arguments)};var dynCall_jdii=Module["dynCall_jdii"]=function(){return(dynCall_jdii=Module["dynCall_jdii"]=Module["asm"]["dynCall_jdii"]).apply(null,arguments)};var dynCall_vijijji=Module["dynCall_vijijji"]=function(){return(dynCall_vijijji=Module["dynCall_vijijji"]=Module["asm"]["dynCall_vijijji"]).apply(null,arguments)};var dynCall_iijjji=Module["dynCall_iijjji"]=function(){return(dynCall_iijjji=Module["dynCall_iijjji"]=Module["asm"]["dynCall_iijjji"]).apply(null,arguments)};var dynCall_viijjji=Module["dynCall_viijjji"]=function(){return(dynCall_viijjji=Module["dynCall_viijjji"]=Module["asm"]["dynCall_viijjji"]).apply(null,arguments)};var 
dynCall_vdii=Module["dynCall_vdii"]=function(){return(dynCall_vdii=Module["dynCall_vdii"]=Module["asm"]["dynCall_vdii"]).apply(null,arguments)};var dynCall_diddi=Module["dynCall_diddi"]=function(){return(dynCall_diddi=Module["dynCall_diddi"]=Module["asm"]["dynCall_diddi"]).apply(null,arguments)};var dynCall_viiiijii=Module["dynCall_viiiijii"]=function(){return(dynCall_viiiijii=Module["dynCall_viiiijii"]=Module["asm"]["dynCall_viiiijii"]).apply(null,arguments)};var dynCall_viiijji=Module["dynCall_viiijji"]=function(){return(dynCall_viiijji=Module["dynCall_viiijji"]=Module["asm"]["dynCall_viiijji"]).apply(null,arguments)};var dynCall_iijjii=Module["dynCall_iijjii"]=function(){return(dynCall_iijjii=Module["dynCall_iijjii"]=Module["asm"]["dynCall_iijjii"]).apply(null,arguments)};var dynCall_viijijii=Module["dynCall_viijijii"]=function(){return(dynCall_viijijii=Module["dynCall_viijijii"]=Module["asm"]["dynCall_viijijii"]).apply(null,arguments)};var dynCall_viijijiii=Module["dynCall_viijijiii"]=function(){return(dynCall_viijijiii=Module["dynCall_viijijiii"]=Module["asm"]["dynCall_viijijiii"]).apply(null,arguments)};var dynCall_vijiji=Module["dynCall_vijiji"]=function(){return(dynCall_vijiji=Module["dynCall_vijiji"]=Module["asm"]["dynCall_vijiji"]).apply(null,arguments)};var dynCall_viijiijiii=Module["dynCall_viijiijiii"]=function(){return(dynCall_viijiijiii=Module["dynCall_viijiijiii"]=Module["asm"]["dynCall_viijiijiii"]).apply(null,arguments)};var dynCall_viiiijiiii=Module["dynCall_viiiijiiii"]=function(){return(dynCall_viiiijiiii=Module["dynCall_viiiijiiii"]=Module["asm"]["dynCall_viiiijiiii"]).apply(null,arguments)};var dynCall_di=Module["dynCall_di"]=function(){return(dynCall_di=Module["dynCall_di"]=Module["asm"]["dynCall_di"]).apply(null,arguments)};var dynCall_jiiiiiiiii=Module["dynCall_jiiiiiiiii"]=function(){return(dynCall_jiiiiiiiii=Module["dynCall_jiiiiiiiii"]=Module["asm"]["dynCall_jiiiiiiiii"]).apply(null,arguments)};var 
dynCall_jiiiiiiiiii=Module["dynCall_jiiiiiiiiii"]=function(){return(dynCall_jiiiiiiiiii=Module["dynCall_jiiiiiiiiii"]=Module["asm"]["dynCall_jiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiiijii=Module["dynCall_iiiiijii"]=function(){return(dynCall_iiiiijii=Module["dynCall_iiiiijii"]=Module["asm"]["dynCall_iiiiijii"]).apply(null,arguments)};var dynCall_iiiiidii=Module["dynCall_iiiiidii"]=function(){return(dynCall_iiiiidii=Module["dynCall_iiiiidii"]=Module["asm"]["dynCall_iiiiidii"]).apply(null,arguments)};var dynCall_iiiiifii=Module["dynCall_iiiiifii"]=function(){return(dynCall_iiiiifii=Module["dynCall_iiiiifii"]=Module["asm"]["dynCall_iiiiifii"]).apply(null,arguments)};var dynCall_iiidiii=Module["dynCall_iiidiii"]=function(){return(dynCall_iiidiii=Module["dynCall_iiidiii"]=Module["asm"]["dynCall_iiidiii"]).apply(null,arguments)};var dynCall_viifffiii=Module["dynCall_viifffiii"]=function(){return(dynCall_viifffiii=Module["dynCall_viifffiii"]=Module["asm"]["dynCall_viifffiii"]).apply(null,arguments)};var dynCall_iiiiffiiiji=Module["dynCall_iiiiffiiiji"]=function(){return(dynCall_iiiiffiiiji=Module["dynCall_iiiiffiiiji"]=Module["asm"]["dynCall_iiiiffiiiji"]).apply(null,arguments)};var dynCall_jiiiiiii=Module["dynCall_jiiiiiii"]=function(){return(dynCall_jiiiiiii=Module["dynCall_jiiiiiii"]=Module["asm"]["dynCall_jiiiiiii"]).apply(null,arguments)};var dynCall_iiiiffiiiii=Module["dynCall_iiiiffiiiii"]=function(){return(dynCall_iiiiffiiiii=Module["dynCall_iiiiffiiiii"]=Module["asm"]["dynCall_iiiiffiiiii"]).apply(null,arguments)};var dynCall_diiiidi=Module["dynCall_diiiidi"]=function(){return(dynCall_diiiidi=Module["dynCall_diiiidi"]=Module["asm"]["dynCall_diiiidi"]).apply(null,arguments)};var dynCall_jiiiiji=Module["dynCall_jiiiiji"]=function(){return(dynCall_jiiiiji=Module["dynCall_jiiiiji"]=Module["asm"]["dynCall_jiiiiji"]).apply(null,arguments)};var 
dynCall_fiiiifi=Module["dynCall_fiiiifi"]=function(){return(dynCall_fiiiifi=Module["dynCall_fiiiifi"]=Module["asm"]["dynCall_fiiiifi"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiidi=Module["dynCall_iiidi"]=function(){return(dynCall_iiidi=Module["dynCall_iiidi"]=Module["asm"]["dynCall_iiidi"]).apply(null,arguments)};var dynCall_iiijjii=Module["dynCall_iiijjii"]=function(){return(dynCall_iiijjii=Module["dynCall_iiijjii"]=Module["asm"]["dynCall_iiijjii"]).apply(null,arguments)};var dynCall_ijiiiiii=Module["dynCall_ijiiiiii"]=function(){return(dynCall_ijiiiiii=Module["dynCall_ijiiiiii"]=Module["asm"]["dynCall_ijiiiiii"]).apply(null,arguments)};var dynCall_ijjiiiiii=Module["dynCall_ijjiiiiii"]=function(){return(dynCall_ijjiiiiii=Module["dynCall_ijjiiiiii"]=Module["asm"]["dynCall_ijjiiiiii"]).apply(null,arguments)};var 
dynCall_vdi=Module["dynCall_vdi"]=function(){return(dynCall_vdi=Module["dynCall_vdi"]=Module["asm"]["dynCall_vdi"]).apply(null,arguments)};var dynCall_vfi=Module["dynCall_vfi"]=function(){return(dynCall_vfi=Module["dynCall_vfi"]=Module["asm"]["dynCall_vfi"]).apply(null,arguments)};var dynCall_fff=Module["dynCall_fff"]=function(){return(dynCall_fff=Module["dynCall_fff"]=Module["asm"]["dynCall_fff"]).apply(null,arguments)};var dynCall_vif=Module["dynCall_vif"]=function(){return(dynCall_vif=Module["dynCall_vif"]=Module["asm"]["dynCall_vif"]).apply(null,arguments)};var dynCall_viif=Module["dynCall_viif"]=function(){return(dynCall_viif=Module["dynCall_viif"]=Module["asm"]["dynCall_viif"]).apply(null,arguments)};var dynCall_ijj=Module["dynCall_ijj"]=function(){return(dynCall_ijj=Module["dynCall_ijj"]=Module["asm"]["dynCall_ijj"]).apply(null,arguments)};var dynCall_vjji=Module["dynCall_vjji"]=function(){return(dynCall_vjji=Module["dynCall_vjji"]=Module["asm"]["dynCall_vjji"]).apply(null,arguments)};var dynCall_viffff=Module["dynCall_viffff"]=function(){return(dynCall_viffff=Module["dynCall_viffff"]=Module["asm"]["dynCall_viffff"]).apply(null,arguments)};var dynCall_vid=Module["dynCall_vid"]=function(){return(dynCall_vid=Module["dynCall_vid"]=Module["asm"]["dynCall_vid"]).apply(null,arguments)};var dynCall_viiiiif=Module["dynCall_viiiiif"]=function(){return(dynCall_viiiiif=Module["dynCall_viiiiif"]=Module["asm"]["dynCall_viiiiif"]).apply(null,arguments)};var dynCall_viiiif=Module["dynCall_viiiif"]=function(){return(dynCall_viiiif=Module["dynCall_viiiif"]=Module["asm"]["dynCall_viiiif"]).apply(null,arguments)};var dynCall_viiiiiif=Module["dynCall_viiiiiif"]=function(){return(dynCall_viiiiiif=Module["dynCall_viiiiiif"]=Module["asm"]["dynCall_viiiiiif"]).apply(null,arguments)};var dynCall_iiiijiii=Module["dynCall_iiiijiii"]=function(){return(dynCall_iiiijiii=Module["dynCall_iiiijiii"]=Module["asm"]["dynCall_iiiijiii"]).apply(null,arguments)};var 
dynCall_iiiij=Module["dynCall_iiiij"]=function(){return(dynCall_iiiij=Module["dynCall_iiiij"]=Module["asm"]["dynCall_iiiij"]).apply(null,arguments)};var dynCall_iiif=Module["dynCall_iiif"]=function(){return(dynCall_iiif=Module["dynCall_iiif"]=Module["asm"]["dynCall_iiif"]).apply(null,arguments)};var dynCall_fif=Module["dynCall_fif"]=function(){return(dynCall_fif=Module["dynCall_fif"]=Module["asm"]["dynCall_fif"]).apply(null,arguments)};var dynCall_iiiiiifff=Module["dynCall_iiiiiifff"]=function(){return(dynCall_iiiiiifff=Module["dynCall_iiiiiifff"]=Module["asm"]["dynCall_iiiiiifff"]).apply(null,arguments)};var dynCall_iiiiiifiif=Module["dynCall_iiiiiifiif"]=function(){return(dynCall_iiiiiifiif=Module["dynCall_iiiiiifiif"]=Module["asm"]["dynCall_iiiiiifiif"]).apply(null,arguments)};var dynCall_iiiiiifiii=Module["dynCall_iiiiiifiii"]=function(){return(dynCall_iiiiiifiii=Module["dynCall_iiiiiifiii"]=Module["asm"]["dynCall_iiiiiifiii"]).apply(null,arguments)};var dynCall_iiiiiiifiif=Module["dynCall_iiiiiiifiif"]=function(){return(dynCall_iiiiiiifiif=Module["dynCall_iiiiiiifiif"]=Module["asm"]["dynCall_iiiiiiifiif"]).apply(null,arguments)};var dynCall_fiff=Module["dynCall_fiff"]=function(){return(dynCall_fiff=Module["dynCall_fiff"]=Module["asm"]["dynCall_fiff"]).apply(null,arguments)};var dynCall_fiiiiiifiifif=Module["dynCall_fiiiiiifiifif"]=function(){return(dynCall_fiiiiiifiifif=Module["dynCall_fiiiiiifiifif"]=Module["asm"]["dynCall_fiiiiiifiifif"]).apply(null,arguments)};var dynCall_fiiiiiifiiiif=Module["dynCall_fiiiiiifiiiif"]=function(){return(dynCall_fiiiiiifiiiif=Module["dynCall_fiiiiiifiiiif"]=Module["asm"]["dynCall_fiiiiiifiiiif"]).apply(null,arguments)};var dynCall_vifiiii=Module["dynCall_vifiiii"]=function(){return(dynCall_vifiiii=Module["dynCall_vifiiii"]=Module["asm"]["dynCall_vifiiii"]).apply(null,arguments)};var 
dynCall_iifiiiijii=Module["dynCall_iifiiiijii"]=function(){return(dynCall_iifiiiijii=Module["dynCall_iifiiiijii"]=Module["asm"]["dynCall_iifiiiijii"]).apply(null,arguments)};var dynCall_vifif=Module["dynCall_vifif"]=function(){return(dynCall_vifif=Module["dynCall_vifif"]=Module["asm"]["dynCall_vifif"]).apply(null,arguments)};var dynCall_vifijii=Module["dynCall_vifijii"]=function(){return(dynCall_vifijii=Module["dynCall_vifijii"]=Module["asm"]["dynCall_vifijii"]).apply(null,arguments)};var dynCall_iiiifffiii=Module["dynCall_iiiifffiii"]=function(){return(dynCall_iiiifffiii=Module["dynCall_iiiifffiii"]=Module["asm"]["dynCall_iiiifffiii"]).apply(null,arguments)};var dynCall_iiiifffffi=Module["dynCall_iiiifffffi"]=function(){return(dynCall_iiiifffffi=Module["dynCall_iiiifffffi"]=Module["asm"]["dynCall_iiiifffffi"]).apply(null,arguments)};var dynCall_viffiiiif=Module["dynCall_viffiiiif"]=function(){return(dynCall_viffiiiif=Module["dynCall_viffiiiif"]=Module["asm"]["dynCall_viffiiiif"]).apply(null,arguments)};var dynCall_viffiifffffiii=Module["dynCall_viffiifffffiii"]=function(){return(dynCall_viffiifffffiii=Module["dynCall_viffiifffffiii"]=Module["asm"]["dynCall_viffiifffffiii"]).apply(null,arguments)};var dynCall_viffffiifffiiiiif=Module["dynCall_viffffiifffiiiiif"]=function(){return(dynCall_viffffiifffiiiiif=Module["dynCall_viffffiifffiiiiif"]=Module["asm"]["dynCall_viffffiifffiiiiif"]).apply(null,arguments)};var dynCall_iiiifffffii=Module["dynCall_iiiifffffii"]=function(){return(dynCall_iiiifffffii=Module["dynCall_iiiifffffii"]=Module["asm"]["dynCall_iiiifffffii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiifii=Module["dynCall_viiiiiiiiiiifii"]=function(){return(dynCall_viiiiiiiiiiifii=Module["dynCall_viiiiiiiiiiifii"]=Module["asm"]["dynCall_viiiiiiiiiiifii"]).apply(null,arguments)};var dynCall_viff=Module["dynCall_viff"]=function(){return(dynCall_viff=Module["dynCall_viff"]=Module["asm"]["dynCall_viff"]).apply(null,arguments)};var 
dynCall_iiiifiiiii=Module["dynCall_iiiifiiiii"]=function(){return(dynCall_iiiifiiiii=Module["dynCall_iiiifiiiii"]=Module["asm"]["dynCall_iiiifiiiii"]).apply(null,arguments)};var dynCall_iiiiifiiiiif=Module["dynCall_iiiiifiiiiif"]=function(){return(dynCall_iiiiifiiiiif=Module["dynCall_iiiiifiiiiif"]=Module["asm"]["dynCall_iiiiifiiiiif"]).apply(null,arguments)};var dynCall_viiff=Module["dynCall_viiff"]=function(){return(dynCall_viiff=Module["dynCall_viiff"]=Module["asm"]["dynCall_viiff"]).apply(null,arguments)};var dynCall_viifffi=Module["dynCall_viifffi"]=function(){return(dynCall_viifffi=Module["dynCall_viifffi"]=Module["asm"]["dynCall_viifffi"]).apply(null,arguments)};var dynCall_viiifiiiii=Module["dynCall_viiifiiiii"]=function(){return(dynCall_viiifiiiii=Module["dynCall_viiifiiiii"]=Module["asm"]["dynCall_viiifiiiii"]).apply(null,arguments)};var dynCall_viiiifiiiiif=Module["dynCall_viiiifiiiiif"]=function(){return(dynCall_viiiifiiiiif=Module["dynCall_viiiifiiiiif"]=Module["asm"]["dynCall_viiiifiiiiif"]).apply(null,arguments)};var dynCall_iifff=Module["dynCall_iifff"]=function(){return(dynCall_iifff=Module["dynCall_iifff"]=Module["asm"]["dynCall_iifff"]).apply(null,arguments)};var dynCall_viiifiii=Module["dynCall_viiifiii"]=function(){return(dynCall_viiifiii=Module["dynCall_viiifiii"]=Module["asm"]["dynCall_viiifiii"]).apply(null,arguments)};var dynCall_iif=Module["dynCall_iif"]=function(){return(dynCall_iif=Module["dynCall_iif"]=Module["asm"]["dynCall_iif"]).apply(null,arguments)};var dynCall_viij=Module["dynCall_viij"]=function(){return(dynCall_viij=Module["dynCall_viij"]=Module["asm"]["dynCall_viij"]).apply(null,arguments)};var dynCall_viijijj=Module["dynCall_viijijj"]=function(){return(dynCall_viijijj=Module["dynCall_viijijj"]=Module["asm"]["dynCall_viijijj"]).apply(null,arguments)};var dynCall_viijj=Module["dynCall_viijj"]=function(){return(dynCall_viijj=Module["dynCall_viijj"]=Module["asm"]["dynCall_viijj"]).apply(null,arguments)};var 
dynCall_viiiij=Module["dynCall_viiiij"]=function(){return(dynCall_viiiij=Module["dynCall_viiiij"]=Module["asm"]["dynCall_viiiij"]).apply(null,arguments)};var dynCall_iiijji=Module["dynCall_iiijji"]=function(){return(dynCall_iiijji=Module["dynCall_iiijji"]=Module["asm"]["dynCall_iiijji"]).apply(null,arguments)};var dynCall_ijjiiiii=Module["dynCall_ijjiiiii"]=function(){return(dynCall_ijjiiiii=Module["dynCall_ijjiiiii"]=Module["asm"]["dynCall_ijjiiiii"]).apply(null,arguments)};var dynCall_vidd=Module["dynCall_vidd"]=function(){return(dynCall_vidd=Module["dynCall_vidd"]=Module["asm"]["dynCall_vidd"]).apply(null,arguments)};var dynCall_iiiiiifffiiifiii=Module["dynCall_iiiiiifffiiifiii"]=function(){return(dynCall_iiiiiifffiiifiii=Module["dynCall_iiiiiifffiiifiii"]=Module["asm"]["dynCall_iiiiiifffiiifiii"]).apply(null,arguments)};var dynCall_viid=Module["dynCall_viid"]=function(){return(dynCall_viid=Module["dynCall_viid"]=Module["asm"]["dynCall_viid"]).apply(null,arguments)};var dynCall_viiif=Module["dynCall_viiif"]=function(){return(dynCall_viiif=Module["dynCall_viiif"]=Module["asm"]["dynCall_viiif"]).apply(null,arguments)};var dynCall_iiiiiff=Module["dynCall_iiiiiff"]=function(){return(dynCall_iiiiiff=Module["dynCall_iiiiiff"]=Module["asm"]["dynCall_iiiiiff"]).apply(null,arguments)};var dynCall_iiij=Module["dynCall_iiij"]=function(){return(dynCall_iiij=Module["dynCall_iiij"]=Module["asm"]["dynCall_iiij"]).apply(null,arguments)};var dynCall_vf=Module["dynCall_vf"]=function(){return(dynCall_vf=Module["dynCall_vf"]=Module["asm"]["dynCall_vf"]).apply(null,arguments)};var dynCall_vffff=Module["dynCall_vffff"]=function(){return(dynCall_vffff=Module["dynCall_vffff"]=Module["asm"]["dynCall_vffff"]).apply(null,arguments)};var dynCall_vff=Module["dynCall_vff"]=function(){return(dynCall_vff=Module["dynCall_vff"]=Module["asm"]["dynCall_vff"]).apply(null,arguments)};var 
dynCall_vifff=Module["dynCall_vifff"]=function(){return(dynCall_vifff=Module["dynCall_vifff"]=Module["asm"]["dynCall_vifff"]).apply(null,arguments)};var dynCall_viifff=Module["dynCall_viifff"]=function(){return(dynCall_viifff=Module["dynCall_viifff"]=Module["asm"]["dynCall_viifff"]).apply(null,arguments)};var dynCall_vij=Module["dynCall_vij"]=function(){return(dynCall_vij=Module["dynCall_vij"]=Module["asm"]["dynCall_vij"]).apply(null,arguments)};var dynCall_ij=Module["dynCall_ij"]=function(){return(dynCall_ij=Module["dynCall_ij"]=Module["asm"]["dynCall_ij"]).apply(null,arguments)};var dynCall_f=Module["dynCall_f"]=function(){return(dynCall_f=Module["dynCall_f"]=Module["asm"]["dynCall_f"]).apply(null,arguments)};var dynCall_vfff=Module["dynCall_vfff"]=function(){return(dynCall_vfff=Module["dynCall_vfff"]=Module["asm"]["dynCall_vfff"]).apply(null,arguments)};var dynCall_vffffffi=Module["dynCall_vffffffi"]=function(){return(dynCall_vffffffi=Module["dynCall_vffffffi"]=Module["asm"]["dynCall_vffffffi"]).apply(null,arguments)};var dynCall_ff=Module["dynCall_ff"]=function(){return(dynCall_ff=Module["dynCall_ff"]=Module["asm"]["dynCall_ff"]).apply(null,arguments)};var dynCall_iiiiiiffiiiiiiiiiffffiii=Module["dynCall_iiiiiiffiiiiiiiiiffffiii"]=function(){return(dynCall_iiiiiiffiiiiiiiiiffffiii=Module["dynCall_iiiiiiffiiiiiiiiiffffiii"]=Module["asm"]["dynCall_iiiiiiffiiiiiiiiiffffiii"]).apply(null,arguments)};var dynCall_viififi=Module["dynCall_viififi"]=function(){return(dynCall_viififi=Module["dynCall_viififi"]=Module["asm"]["dynCall_viififi"]).apply(null,arguments)};var dynCall_if=Module["dynCall_if"]=function(){return(dynCall_if=Module["dynCall_if"]=Module["asm"]["dynCall_if"]).apply(null,arguments)};var dynCall_viiffiiiiiiiii=Module["dynCall_viiffiiiiiiiii"]=function(){return(dynCall_viiffiiiiiiiii=Module["dynCall_viiffiiiiiiiii"]=Module["asm"]["dynCall_viiffiiiiiiiii"]).apply(null,arguments)};var 
dynCall_viiffiiiiii=Module["dynCall_viiffiiiiii"]=function(){return(dynCall_viiffiiiiii=Module["dynCall_viiffiiiiii"]=Module["asm"]["dynCall_viiffiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiijiii=Module["dynCall_viiiiiiiijiii"]=function(){return(dynCall_viiiiiiiijiii=Module["dynCall_viiiiiiiijiii"]=Module["asm"]["dynCall_viiiiiiiijiii"]).apply(null,arguments)};function invoke_ii(index,a1){var sp=stackSave();try{return dynCall_ii(index,a1)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_v(index){var sp=stackSave();try{dynCall_v(index)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vii(index,a1,a2){var sp=stackSave();try{dynCall_vii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iii(index,a1,a2){var sp=stackSave();try{return dynCall_iii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vi(index,a1){var sp=stackSave();try{dynCall_vi(index,a1)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiii(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_iiiii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiii(index,a1,a2,a3){var sp=stackSave();try{return dynCall_iiii(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_iiiiii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viii(index,a1,a2,a3){var sp=stackSave();try{dynCall_viii(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_i(index){var sp=stackSave();try{return dynCall_i(index)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiii(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viiii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw 
e;_setThrew(1,0)}}function invoke_iiiiiii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return dynCall_iiiiiii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiiii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{return dynCall_iiiiiiii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var sp=stackSave();try{return dynCall_iiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_fiii(index,a1,a2,a3){var sp=stackSave();try{return dynCall_fiii(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_diii(index,a1,a2,a3){var sp=stackSave();try{return dynCall_diii(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiiii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{dynCall_viiiiiii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11){var sp=stackSave();try{return dynCall_iiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var sp=stackSave();try{dynCall_viiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14,a15){var sp=stackSave();try{dynCall_viiiiiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14,a15)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{dynCall_viiiiii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function 
invoke_viiiii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_viiiii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8){var sp=stackSave();try{dynCall_viiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8){var sp=stackSave();try{return dynCall_iiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_ddiii(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_ddiii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiifii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_iiifii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viifi(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viifi(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viffi(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viffi(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vidi(index,a1,a2,a3){var sp=stackSave();try{dynCall_vidi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viidi(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viidi(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_dii(index,a1,a2){var sp=stackSave();try{return dynCall_dii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9){var sp=stackSave();try{dynCall_viiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiffi(index,a1,a2,a3,a4,a5){var 
sp=stackSave();try{dynCall_viiffi(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vifi(index,a1,a2,a3){var sp=stackSave();try{dynCall_vifi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_fii(index,a1,a2){var sp=stackSave();try{return dynCall_fii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iifi(index,a1,a2,a3){var sp=stackSave();try{return dynCall_iifi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_fffi(index,a1,a2,a3){var sp=stackSave();try{return dynCall_fffi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_dddi(index,a1,a2,a3){var sp=stackSave();try{return dynCall_dddi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iidi(index,a1,a2,a3){var sp=stackSave();try{return dynCall_iidi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_diiii(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_diiii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9){var sp=stackSave();try{return dynCall_iiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_idi(index,a1,a2){var sp=stackSave();try{return dynCall_idi(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viifii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_viifii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_fi(index,a1){var sp=stackSave();try{return dynCall_fi(index,a1)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiifi(index,a1,a2,a3,a4){var sp=stackSave();try{return 
dynCall_iiifi(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiifi(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{dynCall_viiiifi(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiifii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return dynCall_iiiifii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_ifi(index,a1,a2){var sp=stackSave();try{return dynCall_ifi(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12){var sp=stackSave();try{dynCall_viiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vifii(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_vifii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiidii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return dynCall_iiiidii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiifi(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_viiifi(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13){var sp=stackSave();try{dynCall_viiiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiiiiifi(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var sp=stackSave();try{return dynCall_iiiiiiiiifi(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vidd(index,a1,a2,a3){var sp=stackSave();try{dynCall_vidd(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iij(index,a1,a2,a3){var sp=stackSave();try{return 
dynCall_iij(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiijiii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{return dynCall_iiijiii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jii(index,a1,a2){var sp=stackSave();try{return dynCall_jii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jiiii(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_jiiii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_ji(index,a1){var sp=stackSave();try{return dynCall_ji(index,a1)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viijii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{dynCall_viijii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiij(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return dynCall_iiiiij(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_j(index){var sp=stackSave();try{return dynCall_j(index)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iijiii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return dynCall_iijiii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jiii(index,a1,a2,a3){var sp=stackSave();try{return dynCall_jiii(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vijii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_vijii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viji(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viji(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiji(index,a1,a2,a3,a4,a5){var 
sp=stackSave();try{dynCall_viiji(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiijii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{return dynCall_iiiijii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jijii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_jijii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jjji(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_jjji(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viijiiijiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13){var sp=stackSave();try{dynCall_viijiiijiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_ijji(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_ijji(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jji(index,a1,a2,a3){var sp=stackSave();try{return dynCall_jji(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiji(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{dynCall_viiiji(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jdi(index,a1,a2){var sp=stackSave();try{return dynCall_jdi(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vijjji(index,a1,a2,a3,a4,a5,a6,a7,a8){var sp=stackSave();try{dynCall_vijjji(index,a1,a2,a3,a4,a5,a6,a7,a8)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiji(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_iiji(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jjii(index,a1,a2,a3,a4){var sp=stackSave();try{return 
dynCall_jjii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_dji(index,a1,a2,a3){var sp=stackSave();try{return dynCall_dji(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iji(index,a1,a2,a3){var sp=stackSave();try{return dynCall_iji(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jiidi(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_jiidi(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iijii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_iijii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiijjii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9){var sp=stackSave();try{return dynCall_iiiijjii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iijjiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11){var sp=stackSave();try{return dynCall_iijjiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iijiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9){var sp=stackSave();try{return dynCall_iijiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiiiiiji(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11){var sp=stackSave();try{return dynCall_iiiiiiiiiji(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vji(index,a1,a2,a3){var sp=stackSave();try{dynCall_vji(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jiji(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_jiji(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iijji(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return 
dynCall_iijji(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jidi(index,a1,a2,a3){var sp=stackSave();try{return dynCall_jidi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vijiii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{dynCall_vijiii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vjjjiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var sp=stackSave();try{dynCall_vjjjiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vjiiiii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{dynCall_vjiiiii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var sp=stackSave();try{return dynCall_jiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jiiiii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_jiiiii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}Module["ccall"]=ccall;Module["cwrap"]=cwrap;Module["stackTrace"]=stackTrace;Module["addRunDependency"]=addRunDependency;Module["removeRunDependency"]=removeRunDependency;Module["FS_createPath"]=FS.createPath;Module["FS_createDataFile"]=FS.createDataFile;Module["stackTrace"]=stackTrace;var calledRun;function ExitStatus(status){this.name="ExitStatus";this.message="Program terminated with exit("+status+")";this.status=status}var calledMain=false;dependenciesFulfilled=function runCaller(){if(!calledRun)run();if(!calledRun)dependenciesFulfilled=runCaller};function callMain(args){var entryFunction=Module["_main"];args=args||[];var argc=args.length+1;var argv=stackAlloc((argc+1)*4);HEAP32[argv>>2]=allocateUTF8OnStack(thisProgram);for(var 
i=1;i<argc;i++){HEAP32[(argv>>2)+i]=allocateUTF8OnStack(args[i-1])}HEAP32[(argv>>2)+argc]=0;try{var ret=entryFunction(argc,argv);exit(ret,true);return ret}catch(e){return handleException(e)}finally{calledMain=true}}function run(args){args=args||arguments_;if(runDependencies>0){return}preRun();if(runDependencies>0){return}function doRun(){if(calledRun)return;calledRun=true;Module["calledRun"]=true;if(ABORT)return;initRuntime();preMain();readyPromiseResolve(Module);if(Module["onRuntimeInitialized"])Module["onRuntimeInitialized"]();if(shouldRunNow)callMain(args);postRun()}if(Module["setStatus"]){Module["setStatus"]("Running...");setTimeout(function(){setTimeout(function(){Module["setStatus"]("")},1);doRun()},1)}else{doRun()}}Module["run"]=run;function exit(status,implicit){EXITSTATUS=status;procExit(status)}function procExit(code){EXITSTATUS=code;if(!keepRuntimeAlive()){if(Module["onExit"])Module["onExit"](code);ABORT=true}quit_(code,new ExitStatus(code))}if(Module["preInit"]){if(typeof Module["preInit"]=="function")Module["preInit"]=[Module["preInit"]];while(Module["preInit"].length>0){Module["preInit"].pop()()}}var shouldRunNow=true;if(Module["noInitialRun"])shouldRunNow=false;run();
-
-
- return unityFramework.ready
-}
-);
-})();
-if (typeof exports === 'object' && typeof module === 'object')
- module.exports = unityFramework;
-else if (typeof define === 'function' && define['amd'])
- define([], function() { return unityFramework; });
-else if (typeof exports === 'object')
- exports["unityFramework"] = unityFramework;
diff --git a/spaces/DiffusionArtco/AnimeTop50/app.py b/spaces/DiffusionArtco/AnimeTop50/app.py
deleted file mode 100644
index 14a95e466bb8504f17080cb514f34114e9606f3c..0000000000000000000000000000000000000000
--- a/spaces/DiffusionArtco/AnimeTop50/app.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import gradio as gr
-import requests
-from PIL import Image
-from io import BytesIO
-import base64
-
-api_url = "https://5cb20b40-572c-426f-9466-995256f9b6eb.id.repl.co/generate_image"
-
-def generate_image(model="Abyss OrangeMix", prompt="", seed=0, negative_prompt="", sampler="k_dpmpp_2s_a", steps=50):
-    params = {"model": model, "prompt": prompt, "seed": seed, "negative_prompt": negative_prompt, "sampler": sampler, "steps": steps}  # let requests URL-encode the query string
-    response = requests.post(api_url, params=params, timeout=400)
- if response.status_code == 200:
- img_base64 = response.json()["url"]
- img_bytes = base64.b64decode(img_base64)
- img = Image.open(BytesIO(img_bytes))
- return img
- else:
- return None
-
-inputs = [
- gr.inputs.Dropdown(['Abyss OrangeMix', 'AbyssOrangeMix-AfterDark','Anime Pencil Diffusion', 'Anygen', 'Anything Diffusion', 'Anything v3', 'anything_v4_inpainting', 'Arcane Diffusion', 'BPModel', 'Counterfeit', 'Cyberpunk Anime Diffusion', 'CyriousMix', 'DGSpitzer Art Diffusion', 'Dreamshaper', 'DucHaiten Classic Anime', 'Eimis Anime Diffusion', 'Ghibli Diffusion', 'GuoFeng', 'Hentai Diffusion', 'Kenshi', 'Midjourney Diffusion', 'NeverEnding Dream', 'Openniji', 'Pastel Mix', 'Protogen Anime', 'Rev Animated'], label="Model", default="Abyss OrangeMix"),
- gr.inputs.Textbox(label="Prompt"),
- gr.inputs.Number(label="Seed", default=0),
- gr.inputs.Textbox(label="Negative Prompt", default=""),
- gr.inputs.Dropdown(["k_lms", "k_heun", "k_euler", "k_euler_a", "k_dpm_2", "k_dpm_2_a", "DDIM", "k_dpm_fast", "k_dpm_adaptive", "k_dpmpp_2m", "k_dpmpp_2s_a", "k_dpmpp_sde"], label="Sampler", default="k_dpmpp_2s_a"),
- gr.inputs.Number(label="Steps", default=50)
-]
-
-outputs = gr.outputs.Image(label="Generated Image", type="pil")
-
-interface = gr.Interface(generate_image, inputs, outputs, title="",
-                         description="",
- examples=[])
-
-interface.launch()
-
-
diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/resnext.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/resnext.py
deleted file mode 100644
index cdbb7461a6c8eb126717967cdca5d5ce392aecea..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/resnext.py
+++ /dev/null
@@ -1,182 +0,0 @@
-import os
-import sys
-import torch
-import torch.nn as nn
-import math
-try:
- from lib.nn import SynchronizedBatchNorm2d
-except ImportError:
- from torch.nn import BatchNorm2d as SynchronizedBatchNorm2d
-try:
- from urllib import urlretrieve
-except ImportError:
- from urllib.request import urlretrieve
-
-
-__all__ = ['ResNeXt', 'resnext101'] # support resnext 101
-
-
-model_urls = {
- #'resnext50': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnext50-imagenet.pth',
- 'resnext101': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnext101-imagenet.pth'
-}
-
-
-def conv3x3(in_planes, out_planes, stride=1):
- "3x3 convolution with padding"
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
-
-
-class GroupBottleneck(nn.Module):
- expansion = 2
-
- def __init__(self, inplanes, planes, stride=1, groups=1, downsample=None):
- super(GroupBottleneck, self).__init__()
- self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
- self.bn1 = SynchronizedBatchNorm2d(planes)
- self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
- padding=1, groups=groups, bias=False)
- self.bn2 = SynchronizedBatchNorm2d(planes)
- self.conv3 = nn.Conv2d(planes, planes * 2, kernel_size=1, bias=False)
- self.bn3 = SynchronizedBatchNorm2d(planes * 2)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
- out = self.relu(out)
-
- out = self.conv3(out)
- out = self.bn3(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class ResNeXt(nn.Module):
-
- def __init__(self, block, layers, groups=32, num_classes=1000):
- self.inplanes = 128
- super(ResNeXt, self).__init__()
- self.conv1 = conv3x3(3, 64, stride=2)
- self.bn1 = SynchronizedBatchNorm2d(64)
- self.relu1 = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(64, 64)
- self.bn2 = SynchronizedBatchNorm2d(64)
- self.relu2 = nn.ReLU(inplace=True)
- self.conv3 = conv3x3(64, 128)
- self.bn3 = SynchronizedBatchNorm2d(128)
- self.relu3 = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
-
- self.layer1 = self._make_layer(block, 128, layers[0], groups=groups)
- self.layer2 = self._make_layer(block, 256, layers[1], stride=2, groups=groups)
- self.layer3 = self._make_layer(block, 512, layers[2], stride=2, groups=groups)
- self.layer4 = self._make_layer(block, 1024, layers[3], stride=2, groups=groups)
- self.avgpool = nn.AvgPool2d(7, stride=1)
- self.fc = nn.Linear(1024 * block.expansion, num_classes)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels // m.groups
- m.weight.data.normal_(0, math.sqrt(2. / n))
- elif isinstance(m, SynchronizedBatchNorm2d):
- m.weight.data.fill_(1)
- m.bias.data.zero_()
-
- def _make_layer(self, block, planes, blocks, stride=1, groups=1):
- downsample = None
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.Conv2d(self.inplanes, planes * block.expansion,
- kernel_size=1, stride=stride, bias=False),
- SynchronizedBatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(block(self.inplanes, planes, stride, groups, downsample))
- self.inplanes = planes * block.expansion
- for i in range(1, blocks):
- layers.append(block(self.inplanes, planes, groups=groups))
-
- return nn.Sequential(*layers)
-
- def forward(self, x):
- x = self.relu1(self.bn1(self.conv1(x)))
- x = self.relu2(self.bn2(self.conv2(x)))
- x = self.relu3(self.bn3(self.conv3(x)))
- x = self.maxpool(x)
-
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.layer4(x)
-
- x = self.avgpool(x)
- x = x.view(x.size(0), -1)
- x = self.fc(x)
-
- return x
-
-
-'''
-def resnext50(pretrained=False, **kwargs):
- """Constructs a ResNet-50 model.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on Places
- """
- model = ResNeXt(GroupBottleneck, [3, 4, 6, 3], **kwargs)
- if pretrained:
- model.load_state_dict(load_url(model_urls['resnext50']), strict=False)
- return model
-'''
-
-
-def resnext101(pretrained=False, **kwargs):
- """Constructs a ResNet-101 model.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on Places
- """
- model = ResNeXt(GroupBottleneck, [3, 4, 23, 3], **kwargs)
- if pretrained:
- model.load_state_dict(load_url(model_urls['resnext101']), strict=False)
- return model
-
-
-# def resnext152(pretrained=False, **kwargs):
-# """Constructs a ResNeXt-152 model.
-#
-# Args:
-# pretrained (bool): If True, returns a model pre-trained on Places
-# """
-# model = ResNeXt(GroupBottleneck, [3, 8, 36, 3], **kwargs)
-# if pretrained:
-# model.load_state_dict(load_url(model_urls['resnext152']))
-# return model
-
-
-def load_url(url, model_dir='./pretrained', map_location=None):
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- filename = url.split('/')[-1]
- cached_file = os.path.join(model_dir, filename)
- if not os.path.exists(cached_file):
- sys.stderr.write('Downloading: "{}" to {}\n'.format(url, cached_file))
- urlretrieve(url, cached_file)
- return torch.load(cached_file, map_location=map_location)
diff --git a/spaces/Dorado607/ChuanhuChatGPT/modules/models/configuration_moss.py b/spaces/Dorado607/ChuanhuChatGPT/modules/models/configuration_moss.py
deleted file mode 100644
index 9bad4396ecea6578c1628732d0ef077d8964d45d..0000000000000000000000000000000000000000
--- a/spaces/Dorado607/ChuanhuChatGPT/modules/models/configuration_moss.py
+++ /dev/null
@@ -1,118 +0,0 @@
-""" Moss model configuration"""
-
-from transformers.utils import logging
-from transformers.configuration_utils import PretrainedConfig
-
-
-logger = logging.get_logger(__name__)
-
-
-class MossConfig(PretrainedConfig):
- r"""
- This is the configuration class to store the configuration of a [`MossModel`]. It is used to instantiate a
- Moss model according to the specified arguments, defining the model architecture. Instantiating a configuration
- with the defaults will yield a similar configuration to that of the Moss
- [fnlp/moss-moon-003-base](https://huggingface.co/fnlp/moss-moon-003-base) architecture. Configuration objects
- inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from
- [`PretrainedConfig`] for more information.
-
- Args:
- vocab_size (`int`, *optional*, defaults to 107008):
- Vocabulary size of the Moss model. Defines the number of different tokens that can be represented by the
- `inputs_ids` passed when calling [`MossModel`].
- n_positions (`int`, *optional*, defaults to 2048):
- The maximum sequence length that this model might ever be used with. Typically set this to something large
- just in case (e.g., 512 or 1024 or 2048).
- n_embd (`int`, *optional*, defaults to 4096):
- Dimensionality of the embeddings and hidden states.
- n_layer (`int`, *optional*, defaults to 28):
- Number of hidden layers in the Transformer encoder.
- n_head (`int`, *optional*, defaults to 16):
- Number of attention heads for each attention layer in the Transformer encoder.
- rotary_dim (`int`, *optional*, defaults to 64):
- Number of dimensions in the embedding that Rotary Position Embedding is applied to.
- n_inner (`int`, *optional*, defaults to None):
- Dimensionality of the inner feed-forward layers. `None` will set it to 4 times n_embd
- activation_function (`str`, *optional*, defaults to `"gelu_new"`):
- Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`.
-        resid_pdrop (`float`, *optional*, defaults to 0.0):
-            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
-        embd_pdrop (`float`, *optional*, defaults to 0.0):
-            The dropout ratio for the embeddings.
-        attn_pdrop (`float`, *optional*, defaults to 0.0):
-            The dropout ratio for the attention.
- layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
- The epsilon to use in the layer normalization layers.
- initializer_range (`float`, *optional*, defaults to 0.02):
- The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- use_cache (`bool`, *optional*, defaults to `True`):
- Whether or not the model should return the last key/values attentions (not used by all models).
-
- Example:
-
- ```python
- >>> from modeling_moss import MossModel
- >>> from configuration_moss import MossConfig
-
- >>> # Initializing a moss-moon-003-base configuration
- >>> configuration = MossConfig()
-
- >>> # Initializing a model (with random weights) from the configuration
- >>> model = MossModel(configuration)
-
- >>> # Accessing the model configuration
- >>> configuration = model.config
- ```"""
-
- model_type = "moss"
- attribute_map = {
- "max_position_embeddings": "n_positions",
- "hidden_size": "n_embd",
- "num_attention_heads": "n_head",
- "num_hidden_layers": "n_layer",
- }
-
- def __init__(
- self,
- vocab_size=107008,
- n_positions=2048,
- n_ctx=2048,
- n_embd=4096,
- n_layer=28,
- n_head=16,
- rotary_dim=64,
- n_inner=None,
- activation_function="gelu_new",
- resid_pdrop=0.0,
- embd_pdrop=0.0,
- attn_pdrop=0.0,
- layer_norm_epsilon=1e-5,
- initializer_range=0.02,
- use_cache=True,
- bos_token_id=106028,
- eos_token_id=106068,
- tie_word_embeddings=False,
- **kwargs,
- ):
- self.vocab_size = vocab_size
- self.n_ctx = n_ctx
- self.n_positions = n_positions
- self.n_embd = n_embd
- self.n_layer = n_layer
- self.n_head = n_head
- self.n_inner = n_inner
- self.rotary_dim = rotary_dim
- self.activation_function = activation_function
- self.resid_pdrop = resid_pdrop
- self.embd_pdrop = embd_pdrop
- self.attn_pdrop = attn_pdrop
- self.layer_norm_epsilon = layer_norm_epsilon
- self.initializer_range = initializer_range
- self.use_cache = use_cache
-
- self.bos_token_id = bos_token_id
- self.eos_token_id = eos_token_id
-
- super().__init__(
- bos_token_id=bos_token_id, eos_token_id=eos_token_id, tie_word_embeddings=tie_word_embeddings, **kwargs
- )
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/bg_white.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/bg_white.py
deleted file mode 100644
index 9dc181fecbd30b0fcd08cdea87930bd09f4c51fc..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/bg_white.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-import os
-import click
-import cv2
-import numpy as np
-
-
-def bg_white(seg, raw, blur_level=3, gaussian=81):
- seg = cv2.blur(seg, (blur_level, blur_level))
-
- empty = np.ones_like(seg)
- seg_bg = (empty - seg) * 255
- seg_bg = cv2.GaussianBlur(seg_bg, (gaussian, gaussian), 0)
-
- background_mask = cv2.cvtColor(
- 255 - cv2.cvtColor(seg, cv2.COLOR_BGR2GRAY), cv2.COLOR_GRAY2BGR)
- masked_fg = (raw * (1 / 255)) * (seg * (1 / 255))
- masked_bg = (seg_bg * (1 / 255)) * (background_mask * (1 / 255))
-
- frame = np.uint8(cv2.add(masked_bg, masked_fg)*255)
-
- return frame
-
-
-"""
-To turn background into white.
-
-Examples:
-
-\b
-python bg_white.py --raw_img_dir=./SHHQ-1.0/no_segment/ --raw_seg_dir=./SHHQ-1.0/segments/ \\
- --outdir=./SHHQ-1.0/bg_white/
-"""
-
-
-@click.command()
-@click.pass_context
-@click.option('--raw_img_dir', default="./SHHQ-1.0/no_segment/", help='folder of raw image', required=True)
-@click.option('--raw_seg_dir', default='./SHHQ-1.0/segments/', help='folder of segmentation masks', required=True)
-@click.option('--outdir', help='Where to save the output images', default="./SHHQ-1.0/bg_white/", type=str, required=True, metavar='DIR')
-def main(
- ctx: click.Context,
- raw_img_dir: str,
- raw_seg_dir: str,
- outdir: str):
- os.makedirs(outdir, exist_ok=True)
- files = os.listdir(raw_img_dir)
- for file in files:
- print(file)
- raw = cv2.imread(os.path.join(raw_img_dir, file))
- seg = cv2.imread(os.path.join(raw_seg_dir, file))
- assert raw is not None
- assert seg is not None
- white_frame = bg_white(seg, raw)
- cv2.imwrite(os.path.join(outdir, file), white_frame)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/EPFL-VILAB/MultiMAE/utils/transforms.py b/spaces/EPFL-VILAB/MultiMAE/utils/transforms.py
deleted file mode 100644
index 4a4c651e3b537396fe85143809c09d00984c244b..0000000000000000000000000000000000000000
--- a/spaces/EPFL-VILAB/MultiMAE/utils/transforms.py
+++ /dev/null
@@ -1,163 +0,0 @@
-# --------------------------------------------------------
-# Based on timm and MAE-priv code bases
-# https://github.com/rwightman/pytorch-image-models/tree/master/timm
-# https://github.com/BUPT-PRIV/MAE-priv
-# --------------------------------------------------------
-
-import math
-import random
-import warnings
-
-import numpy as np
-import torch
-import torchvision.transforms.functional as F
-from PIL import Image
-
-
-class ToNumpy:
-
- def __call__(self, pil_img):
- np_img = np.array(pil_img, dtype=np.uint8)
- if np_img.ndim < 3:
- np_img = np.expand_dims(np_img, axis=-1)
- np_img = np.rollaxis(np_img, 2) # HWC to CHW
- return np_img
-
-
-class ToTensor:
-
- def __init__(self, dtype=torch.float32):
- self.dtype = dtype
-
- def __call__(self, pil_img):
- np_img = np.array(pil_img, dtype=np.uint8)
- if np_img.ndim < 3:
- np_img = np.expand_dims(np_img, axis=-1)
- np_img = np.rollaxis(np_img, 2) # HWC to CHW
- return torch.from_numpy(np_img).to(dtype=self.dtype)
-
-
-_pil_interpolation_to_str = {
- Image.NEAREST: 'PIL.Image.NEAREST',
- Image.BILINEAR: 'PIL.Image.BILINEAR',
- Image.BICUBIC: 'PIL.Image.BICUBIC',
- Image.LANCZOS: 'PIL.Image.LANCZOS',
- Image.HAMMING: 'PIL.Image.HAMMING',
- Image.BOX: 'PIL.Image.BOX',
-}
-
-
-def _pil_interp(method):
- if method == 'bicubic':
- return Image.BICUBIC
- elif method == 'lanczos':
- return Image.LANCZOS
- elif method == 'hamming':
- return Image.HAMMING
- else:
- # default bilinear, do we want to allow nearest?
- return Image.BILINEAR
-
-
-_RANDOM_INTERPOLATION = (Image.BILINEAR, Image.BICUBIC)
-
-
-class RandomResizedCropAndInterpolation:
- """Crop the given PIL Image to random size and aspect ratio with random interpolation.
-
- A crop of random size (default: of 0.08 to 1.0) of the original size and a random
- aspect ratio (default: of 3/4 to 4/3) of the original aspect ratio is made. This crop
- is finally resized to given size.
- This is popularly used to train the Inception networks.
-
- Args:
- size: expected output size of each edge
- scale: range of size of the origin size cropped
- ratio: range of aspect ratio of the origin aspect ratio cropped
- interpolation: Default: PIL.Image.BILINEAR
- """
-
- def __init__(self, size, scale=(0.08, 1.0), ratio=(3. / 4., 4. / 3.),
- interpolation='bilinear'):
- if isinstance(size, (list, tuple)):
- self.size = tuple(size)
- else:
- self.size = (size, size)
- if (scale[0] > scale[1]) or (ratio[0] > ratio[1]):
- warnings.warn("range should be of kind (min, max)")
-
- if interpolation == 'random':
- self.interpolation = _RANDOM_INTERPOLATION
- else:
- self.interpolation = _pil_interp(interpolation)
- self.scale = scale
- self.ratio = ratio
-
- @staticmethod
- def get_params(img, scale, ratio):
- """Get parameters for ``crop`` for a random sized crop.
-
- Args:
- img (PIL Image): Image to be cropped.
- scale (tuple): range of size of the origin size cropped
- ratio (tuple): range of aspect ratio of the origin aspect ratio cropped
-
- Returns:
- tuple: params (i, j, h, w) to be passed to ``crop`` for a random
- sized crop.
- """
- area = img.size[0] * img.size[1]
-
- for attempt in range(10):
- target_area = random.uniform(*scale) * area
- log_ratio = (math.log(ratio[0]), math.log(ratio[1]))
- aspect_ratio = math.exp(random.uniform(*log_ratio))
-
- w = int(round(math.sqrt(target_area * aspect_ratio)))
- h = int(round(math.sqrt(target_area / aspect_ratio)))
-
- if w <= img.size[0] and h <= img.size[1]:
- i = random.randint(0, img.size[1] - h)
- j = random.randint(0, img.size[0] - w)
- return i, j, h, w
-
- # Fallback to central crop
- in_ratio = img.size[0] / img.size[1]
- if in_ratio < min(ratio):
- w = img.size[0]
- h = int(round(w / min(ratio)))
- elif in_ratio > max(ratio):
- h = img.size[1]
- w = int(round(h * max(ratio)))
- else: # whole image
- w = img.size[0]
- h = img.size[1]
- i = (img.size[1] - h) // 2
- j = (img.size[0] - w) // 2
- return i, j, h, w
-
- def __call__(self, img):
- """
- Args:
- img (PIL Image): Image to be cropped and resized.
-
- Returns:
- PIL Image: Randomly cropped and resized image.
- """
- i, j, h, w = self.get_params(img, self.scale, self.ratio)
- if isinstance(self.interpolation, (tuple, list)):
- interpolation = random.choice(self.interpolation)
- else:
- interpolation = self.interpolation
- return F.resized_crop(img, i, j, h, w, self.size, interpolation)
-
- def __repr__(self):
- if isinstance(self.interpolation, (tuple, list)):
- interpolate_str = ' '.join([_pil_interpolation_to_str[x] for x in self.interpolation])
- else:
- interpolate_str = _pil_interpolation_to_str[self.interpolation]
- format_string = self.__class__.__name__ + '(size={0}'.format(self.size)
- format_string += ', scale={0}'.format(tuple(round(s, 4) for s in self.scale))
- format_string += ', ratio={0}'.format(tuple(round(r, 4) for r in self.ratio))
- format_string += ', interpolation={0})'.format(interpolate_str)
- return format_string
diff --git a/spaces/Edisonymy/buy-or-rent/src/config.py b/spaces/Edisonymy/buy-or-rent/src/config.py
deleted file mode 100644
index 40842ebe42a8674a0721bdd7576901b9e2384d3b..0000000000000000000000000000000000000000
--- a/spaces/Edisonymy/buy-or-rent/src/config.py
+++ /dev/null
@@ -1,23 +0,0 @@
-n_samples = 1000
-n_bins = 30
-button_css = """
-
- """
diff --git a/spaces/Edisonymy/buy-or-rent/src/utils/finance.py b/spaces/Edisonymy/buy-or-rent/src/utils/finance.py
deleted file mode 100644
index e819ca881e6503b46be7a8b85b69fce0a6d6d4ee..0000000000000000000000000000000000000000
--- a/spaces/Edisonymy/buy-or-rent/src/utils/finance.py
+++ /dev/null
@@ -1,34 +0,0 @@
-def get_stamp_duty_next_home(HOUSE_PRICE):
- if HOUSE_PRICE <=250000:
- return 0
- elif HOUSE_PRICE <=925000:
- return (HOUSE_PRICE-250000) * 0.05
- elif HOUSE_PRICE <=1500000:
- return (HOUSE_PRICE-925000) * 0.10 + (925000-250000) * 0.05
- else:
- return (HOUSE_PRICE-1500000) * 0.12 + (925000-250000) * 0.05 + (1500000-925000) * 0.10
-
-
-def annuity_pv(payment, discount_rate, n_periods, growth_rate):
-    # present value of a growing annuity (assumes discount_rate != growth_rate)
- pv = payment * (1- (1+growth_rate)**n_periods*(1+discount_rate)**(-1*n_periods)) / (discount_rate-growth_rate)
- return pv
-
-
-def annuity_fv(payment, discount_rate, n_periods, growth_rate, adjust_for_inflation = 0):
-    # future value of a growing annuity (assumes discount_rate != growth_rate)
- fv = payment * ((1+discount_rate)**n_periods - (1+growth_rate)**n_periods) / (discount_rate-growth_rate)
- return fv / float(1+adjust_for_inflation)**(n_periods)
-
-
-def annuity_payment(pv, discount_rate, n_periods, growth_rate):
- # get payment per period for an annuity
- return pv* (discount_rate - growth_rate) / (1- (1+growth_rate)**n_periods * (1+discount_rate)**(-1*n_periods))
-
-
-def pv_future_payment(payment, discount_rate, n_periods):
- return payment/(1+discount_rate)**(n_periods)
-
-
-def fv_present_payment(payment, discount_rate, n_periods, adjust_for_inflation = 0):
- return payment*(1+discount_rate)**(n_periods) / float(1+adjust_for_inflation)**(n_periods)
diff --git a/spaces/Edward-Ji/essentials-of-microeconomics/.github/ISSUE_TEMPLATE/bug_report.md b/spaces/Edward-Ji/essentials-of-microeconomics/.github/ISSUE_TEMPLATE/bug_report.md
deleted file mode 100644
index dd84ea7824f11be1eeda22377549cbc1aec7f980..0000000000000000000000000000000000000000
--- a/spaces/Edward-Ji/essentials-of-microeconomics/.github/ISSUE_TEMPLATE/bug_report.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-name: Bug report
-about: Create a report to help us improve
-title: ''
-labels: ''
-assignees: ''
-
----
-
-**Describe the bug**
-A clear and concise description of what the bug is.
-
-**To Reproduce**
-Steps to reproduce the behavior:
-1. Go to '...'
-2. Click on '....'
-3. Scroll down to '....'
-4. See error
-
-**Expected behavior**
-A clear and concise description of what you expected to happen.
-
-**Screenshots**
-If applicable, add screenshots to help explain your problem.
-
-**Desktop (please complete the following information):**
- - OS: [e.g. iOS]
- - Browser [e.g. chrome, safari]
- - Version [e.g. 22]
-
-**Smartphone (please complete the following information):**
- - Device: [e.g. iPhone6]
- - OS: [e.g. iOS8.1]
- - Browser [e.g. stock browser, safari]
- - Version [e.g. 22]
-
-**Additional context**
-Add any other context about the problem here.
diff --git a/spaces/EronSamez/RVC_HFmeu/utils/backups.py b/spaces/EronSamez/RVC_HFmeu/utils/backups.py
deleted file mode 100644
index b814f8184792e80e2324685436053d61487110b1..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/utils/backups.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import os
-import shutil
-import hashlib
-import time
-import base64
-
-
-
-
-LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
-WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
-GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup'
-
-def import_google_drive_backup():
- print("Importing Google Drive backup...")
- weights_exist = False
- for root, dirs, files in os.walk(GOOGLE_DRIVE_PATH):
- for filename in files:
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath) and not filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')):
- backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
- shutil.copy2(filepath, backup_filepath) # copy file with metadata
-                print(f'Imported file from Google Drive backup: {filepath}')
- elif filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')) and filename.endswith('.pth'):
- weights_exist = True
- weights_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, os.path.join(GOOGLE_DRIVE_PATH, 'weights')))
- weights_folderpath = os.path.dirname(weights_filepath)
- if not os.path.exists(weights_folderpath):
- os.makedirs(weights_folderpath)
- print(f'Created weights folder: {weights_folderpath}', flush=True)
- shutil.copy2(filepath, weights_filepath) # copy file with metadata
-                print(f'Imported file from weights: {filename}')
- if weights_exist:
- print("Copied weights from Google Drive backup to local weights folder.")
- else:
- print("No weights found in Google Drive backup.")
- print("Google Drive backup import completed.")
-
-def get_md5_hash(file_path):
- hash_md5 = hashlib.md5()
- with open(file_path, "rb") as f:
- for chunk in iter(lambda: f.read(4096), b""):
- hash_md5.update(chunk)
- return hash_md5.hexdigest()
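As a side note, the chunked-read pattern in `get_md5_hash` produces the same digest as hashing all the bytes at once; a minimal self-contained sketch (using an in-memory file object rather than a real path, purely for illustration):

```python
import hashlib
import io

def md5_chunked(fobj, chunk_size=4096):
    """Hash a binary file object in fixed-size chunks, as get_md5_hash does."""
    h = hashlib.md5()
    for chunk in iter(lambda: fobj.read(chunk_size), b""):
        h.update(chunk)
    return h.hexdigest()

data = b"x" * 10_000  # spans multiple 4096-byte chunks
assert md5_chunked(io.BytesIO(data)) == hashlib.md5(data).hexdigest()
```

Reading in chunks keeps memory usage constant even for multi-gigabyte model files.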
-
-def copy_weights_folder_to_drive():
- destination_folder = os.path.join(GOOGLE_DRIVE_PATH, 'weights')
- try:
- if not os.path.exists(destination_folder):
- os.makedirs(destination_folder)
-
- num_copied = 0
- for filename in os.listdir(WEIGHTS_FOLDER):
- if filename.endswith('.pth'):
- source_file = os.path.join(WEIGHTS_FOLDER, filename)
- destination_file = os.path.join(destination_folder, filename)
- if not os.path.exists(destination_file):
- shutil.copy2(source_file, destination_file)
- num_copied += 1
-                    print(f"Copied {filename} to Google Drive!")
-
- if num_copied == 0:
- print("No new finished models found for copying.")
- else:
- print(f"Finished copying {num_copied} files to Google Drive!")
-
- except Exception as e:
- print(f"An error occurred while copying weights: {str(e)}")
- # You can log the error or take appropriate actions here.
-
-def backup_files():
- print("\nStarting backup loop...")
- last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt')
- fully_updated = False # boolean to track if all files are up to date
-
- while True:
- try:
- updated = False # flag to check if any files were updated
- last_backup_timestamps = {}
-
- try:
- with open(last_backup_timestamps_path, 'r') as f:
-                    last_backup_timestamps = dict(line.strip().rsplit(':', 1) for line in f)  # rsplit tolerates ':' inside paths
- except FileNotFoundError:
- pass # File does not exist yet, which is fine
-
- for root, dirs, files in os.walk(LOGS_FOLDER):
- for filename in files:
- if filename != 'last_backup_timestamps.txt':
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
- # check if file has changed since last backup
- last_backup_timestamp = last_backup_timestamps.get(filepath)
- current_timestamp = os.path.getmtime(filepath)
- if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp:
- shutil.copy2(filepath, backup_filepath) # copy file with metadata
- last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp
- if last_backup_timestamp is None:
-                                print(f'Backed up file: {filename}')
-                            else:
-                                print(f'Updating backed up file: {filename}')
- updated = True
- fully_updated = False # if a file is updated, all files are not up to date
-
- # check if any files were deleted in Colab and delete them from the backup drive
- for filepath in list(last_backup_timestamps.keys()):
- if not os.path.exists(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- if os.path.exists(backup_filepath):
- os.remove(backup_filepath)
- print(f'Deleted file: {filepath}')
- del last_backup_timestamps[filepath]
- updated = True
- fully_updated = False # if a file is deleted, all files are not up to date
-
- if not updated and not fully_updated:
- print("Files are up to date.")
- fully_updated = True # if all files are up to date, set the boolean to True
- copy_weights_folder_to_drive()
- sleep_time = 15
- else:
- sleep_time = 0.1
-
- with open(last_backup_timestamps_path, 'w') as f:
- for filepath, timestamp in last_backup_timestamps.items():
- f.write(f'{filepath}:{timestamp}\n')
-
- time.sleep(sleep_time) # wait for 15 seconds before checking again, or 0.1s if not fully up to date to speed up backups
-
- except Exception as e:
- print(f"An error occurred: {str(e)}")
- # You can log the error or take appropriate actions here.
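The `last_backup_timestamps.txt` format persisted by `backup_files` is one `path:mtime` line per file; a sketch of the round-trip, where `rsplit(':', 1)` keeps any colon inside a path from breaking the parse (helper names here are illustrative, not part of the original module):

```python
def dump_timestamps(stamps):
    """Serialize {filepath: mtime-string} to the one-line-per-file format."""
    return "".join(f"{path}:{ts}\n" for path, ts in stamps.items())

def load_timestamps(text):
    """Parse the format back; splitting on the last ':' preserves the path."""
    return dict(
        line.rsplit(":", 1) for line in text.splitlines() if line.strip()
    )

stamps = {"/content/logs/model/a.index": "1690000000.0"}
assert load_timestamps(dump_timestamps(stamps)) == stamps
```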
diff --git a/spaces/Faridmaruf/rvc-genshin-v2/app.py b/spaces/Faridmaruf/rvc-genshin-v2/app.py
deleted file mode 100644
index b545c33df5f8714308d872bc6cab208485e14e6b..0000000000000000000000000000000000000000
--- a/spaces/Faridmaruf/rvc-genshin-v2/app.py
+++ /dev/null
@@ -1,516 +0,0 @@
-import os
-import glob
-import json
-import traceback
-import logging
-import gradio as gr
-import numpy as np
-import librosa
-import torch
-import asyncio
-import edge_tts
-import yt_dlp
-import ffmpeg
-import subprocess
-import sys
-import io
-import wave
-from datetime import datetime
-from fairseq import checkpoint_utils
-from lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from vc_infer_pipeline import VC
-from config import Config
-config = Config()
-logging.getLogger("numba").setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces"
-
-audio_mode = []
-f0method_mode = []
-f0method_info = ""
-if limitation is True:
- audio_mode = ["Upload audio", "TTS Audio"]
- f0method_mode = ["pm", "harvest"]
- f0method_info = "PM is fast, Harvest is good but extremely slow. (Default: PM)"
-else:
- audio_mode = ["Input path", "Upload audio", "Youtube", "TTS Audio"]
- f0method_mode = ["pm", "harvest", "crepe"]
-    f0method_info = "PM is fast; Harvest is good but extremely slow; Crepe is good but requires a GPU (Default: PM)"
-
-def create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, file_index):
- def vc_fn(
- vc_audio_mode,
- vc_input,
- vc_upload,
- tts_text,
- tts_voice,
- f0_up_key,
- f0_method,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- ):
- try:
- print(f"Converting using {model_name}...")
-            if vc_audio_mode in ("Input path", "Youtube") and vc_input != "":
- audio, sr = librosa.load(vc_input, sr=16000, mono=True)
- elif vc_audio_mode == "Upload audio":
- if vc_upload is None:
- return "You need to upload an audio", None
- sampling_rate, audio = vc_upload
- duration = audio.shape[0] / sampling_rate
- if duration > 20 and limitation:
- return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- elif vc_audio_mode == "TTS Audio":
- if len(tts_text) > 100 and limitation:
- return "Text is too long", None
- if tts_text is None or tts_voice is None:
- return "You need to enter text and select a voice", None
- asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3"))
- audio, sr = librosa.load("tts.mp3", sr=16000, mono=True)
- vc_input = "tts.mp3"
- times = [0, 0, 0]
- f0_up_key = int(f0_up_key)
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- 0,
- audio,
- vc_input,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- f0_file=None,
- )
- info = f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s"
- print(f"{model_name} | {info}")
- return info, (tgt_sr, audio_opt)
-        except Exception:
- info = traceback.format_exc()
- print(info)
- return info, None
- return vc_fn
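The factory pattern above, one `vc_fn` closure per model capturing `net_g`, `tgt_sr`, and friends, avoids the classic late-binding pitfall of building callbacks in a loop; a stripped-down illustration with toy values:

```python
def make_handler(model_name, multiplier):
    # Each call captures its own model_name/multiplier, as create_vc_fn does
    # with per-model state; a plain lambda in the loop would share variables.
    def handler(x):
        return model_name, x * multiplier
    return handler

handlers = [make_handler(name, i) for i, name in enumerate(["a", "b", "c"])]
assert handlers[0](10) == ("a", 0)
assert handlers[2](10) == ("c", 20)
```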
-
-def load_model():
- categories = []
- with open("weights/folder_info.json", "r", encoding="utf-8") as f:
- folder_info = json.load(f)
- for category_name, category_info in folder_info.items():
- if not category_info['enable']:
- continue
- category_title = category_info['title']
- category_folder = category_info['folder_path']
- description = category_info['description']
- models = []
- with open(f"weights/{category_folder}/model_info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
- for character_name, info in models_info.items():
- if not info['enable']:
- continue
- model_title = info['title']
- model_name = info['model_path']
- model_author = info.get("author", None)
- model_cover = f"weights/{category_folder}/{character_name}/{info['cover']}"
- model_index = f"weights/{category_folder}/{character_name}/{info['feature_retrieval_library']}"
- cpt = torch.load(f"weights/{category_folder}/{character_name}/{model_name}", map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- model_version = "V1"
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- model_version = "V2"
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False))
- net_g.eval().to(config.device)
- if config.is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, config)
- print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})")
- models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, model_index)))
- categories.append([category_title, category_folder, description, models])
- return categories
-
-def cut_vocal_and_inst(url, audio_provider, split_model):
- if url != "":
- if not os.path.exists("dl_audio"):
- os.mkdir("dl_audio")
- if audio_provider == "Youtube":
- ydl_opts = {
- 'noplaylist': True,
- 'format': 'bestaudio/best',
- 'postprocessors': [{
- 'key': 'FFmpegExtractAudio',
- 'preferredcodec': 'wav',
- }],
- "outtmpl": 'dl_audio/youtube_audio',
- }
- with yt_dlp.YoutubeDL(ydl_opts) as ydl:
- ydl.download([url])
- audio_path = "dl_audio/youtube_audio.wav"
- if split_model == "htdemucs":
- command = f"demucs --two-stems=vocals {audio_path} -o output"
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- print(result.stdout.decode())
- return "output/htdemucs/youtube_audio/vocals.wav", "output/htdemucs/youtube_audio/no_vocals.wav", audio_path, "output/htdemucs/youtube_audio/vocals.wav"
- else:
- command = f"demucs --two-stems=vocals -n mdx_extra_q {audio_path} -o output"
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- print(result.stdout.decode())
- return "output/mdx_extra_q/youtube_audio/vocals.wav", "output/mdx_extra_q/youtube_audio/no_vocals.wav", audio_path, "output/mdx_extra_q/youtube_audio/vocals.wav"
- else:
- raise gr.Error("URL Required!")
- return None, None, None, None
-
-def combine_vocal_and_inst(audio_data, audio_volume, split_model):
- if not os.path.exists("output/result"):
- os.mkdir("output/result")
- vocal_path = "output/result/output.wav"
- output_path = "output/result/combine.mp3"
- if split_model == "htdemucs":
- inst_path = "output/htdemucs/youtube_audio/no_vocals.wav"
- else:
- inst_path = "output/mdx_extra_q/youtube_audio/no_vocals.wav"
- with wave.open(vocal_path, "w") as wave_file:
- wave_file.setnchannels(1)
- wave_file.setsampwidth(2)
- wave_file.setframerate(audio_data[0])
- wave_file.writeframes(audio_data[1].tobytes())
- command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [1:a]volume={audio_volume}dB[v];[0:a][v]amix=inputs=2:duration=longest -b:a 320k -c:a libmp3lame {output_path}'
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- print(result.stdout.decode())
- return output_path
-
-def load_hubert():
- global hubert_model
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(config.device)
- if config.is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-def change_audio_mode(vc_audio_mode):
- if vc_audio_mode == "Input path":
- return (
- # Input & Upload
- gr.Textbox.update(visible=True),
- gr.Checkbox.update(visible=False),
- gr.Audio.update(visible=False),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
- elif vc_audio_mode == "Upload audio":
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Checkbox.update(visible=True),
- gr.Audio.update(visible=True),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
- elif vc_audio_mode == "Youtube":
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Checkbox.update(visible=False),
- gr.Audio.update(visible=False),
- # Youtube
- gr.Dropdown.update(visible=True),
- gr.Textbox.update(visible=True),
- gr.Dropdown.update(visible=True),
- gr.Button.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Slider.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Button.update(visible=True),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
- elif vc_audio_mode == "TTS Audio":
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Checkbox.update(visible=False),
- gr.Audio.update(visible=False),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=True),
- gr.Dropdown.update(visible=True)
- )
- else:
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Checkbox.update(visible=True),
- gr.Audio.update(visible=True),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
-
-def use_microphone(microphone):
-    if microphone:
- return gr.Audio.update(source="microphone")
- else:
- return gr.Audio.update(source="upload")
-
-if __name__ == '__main__':
- load_hubert()
- categories = load_model()
- tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices())
- voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list]
- with gr.Blocks() as app:
-        gr.Markdown(
-            "<div align='center'>\n\n"+
-            "# Multi Model RVC Inference\n\n"+
-            "[](https://github.com/ArkanDash/Multi-Model-RVC-Inference)\n\n"+
-            "</div>"
-        )
- for (folder_title, folder, description, models) in categories:
- with gr.TabItem(folder_title):
- if description:
-                    gr.Markdown(f"### <center> {description}")
- with gr.Tabs():
- if not models:
-                        gr.Markdown("# <center> No Model Loaded.")
-                        gr.Markdown("## <center> Please add model or fix your model path.")
- continue
- for (name, title, author, cover, model_version, vc_fn) in models:
- with gr.TabItem(name):
- with gr.Row():
-                                gr.Markdown(
-                                    '<div align="center">'
-                                    f'<div>{title}</div>\n'+
-                                    f'<div>RVC {model_version} Model</div>\n'+
-                                    (f'<div>Model author: {author}</div>' if author else "")+
-                                    (f'' if cover else "")+
-                                    '</div>'
-                                )
- with gr.Row():
- with gr.Column():
- vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio")
- # Input
- vc_input = gr.Textbox(label="Input audio path", visible=False)
- # Upload
- vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True)
- vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True)
- # Youtube
- vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)")
- vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...")
- vc_split_model = gr.Dropdown(label="Splitter Model", choices=["htdemucs", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)")
- vc_split = gr.Button("Split Audio", variant="primary", visible=False)
- vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False)
- vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False)
- vc_audio_preview = gr.Audio(label="Audio Preview", visible=False)
- # TTS
- tts_text = gr.Textbox(visible=False, label="TTS text", info="Text to speech input")
- tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female")
- with gr.Column():
- vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change female to male voice')
- f0method0 = gr.Radio(
- label="Pitch extraction algorithm",
- info=f0method_info,
- choices=f0method_mode,
- value="pm",
- interactive=True
- )
- index_rate1 = gr.Slider(
- minimum=0,
- maximum=1,
- label="Retrieval feature ratio",
- info="(Default: 0.7)",
- value=0.7,
- interactive=True,
- )
- filter_radius0 = gr.Slider(
- minimum=0,
- maximum=7,
- label="Apply Median Filtering",
- info="The value represents the filter radius and can reduce breathiness.",
- value=3,
- step=1,
- interactive=True,
- )
- resample_sr0 = gr.Slider(
- minimum=0,
- maximum=48000,
- label="Resample the output audio",
- info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling",
- value=0,
- step=1,
- interactive=True,
- )
- rms_mix_rate0 = gr.Slider(
- minimum=0,
- maximum=1,
- label="Volume Envelope",
- info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used",
- value=1,
- interactive=True,
- )
- protect0 = gr.Slider(
- minimum=0,
- maximum=0.5,
- label="Voice Protection",
- info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy",
- value=0.5,
- step=0.01,
- interactive=True,
- )
- with gr.Column():
- vc_log = gr.Textbox(label="Output Information", interactive=False)
- vc_output = gr.Audio(label="Output Audio", interactive=False)
- vc_convert = gr.Button("Convert", variant="primary")
- vc_volume = gr.Slider(
- minimum=0,
- maximum=10,
- label="Vocal volume",
- value=4,
- interactive=True,
- step=1,
-                                info="Adjust vocal volume (Default: 4)",
- visible=False
- )
- vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False)
- vc_combine = gr.Button("Combine",variant="primary", visible=False)
- vc_convert.click(
- fn=vc_fn,
- inputs=[
- vc_audio_mode,
- vc_input,
- vc_upload,
- tts_text,
- tts_voice,
- vc_transform0,
- f0method0,
- index_rate1,
- filter_radius0,
- resample_sr0,
- rms_mix_rate0,
- protect0,
- ],
-                            outputs=[vc_log, vc_output]
- )
- vc_split.click(
- fn=cut_vocal_and_inst,
- inputs=[vc_link, vc_download_audio, vc_split_model],
- outputs=[vc_vocal_preview, vc_inst_preview, vc_audio_preview, vc_input]
- )
- vc_combine.click(
- fn=combine_vocal_and_inst,
- inputs=[vc_output, vc_volume, vc_split_model],
- outputs=[vc_combined_output]
- )
- vc_microphone_mode.change(
- fn=use_microphone,
- inputs=vc_microphone_mode,
- outputs=vc_upload
- )
- vc_audio_mode.change(
- fn=change_audio_mode,
- inputs=[vc_audio_mode],
- outputs=[
- vc_input,
- vc_microphone_mode,
- vc_upload,
- vc_download_audio,
- vc_link,
- vc_split_model,
- vc_split,
- vc_vocal_preview,
- vc_inst_preview,
- vc_audio_preview,
- vc_volume,
- vc_combined_output,
- vc_combine,
- tts_text,
- tts_voice
- ]
- )
- app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab)
\ No newline at end of file
diff --git a/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/modules.py b/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/modules.py
deleted file mode 100644
index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000
--- a/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/modules.py
+++ /dev/null
@@ -1,522 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from lib.infer_pack.transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
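The transpose-normalize-transpose dance in this `LayerNorm` exists because `F.layer_norm` normalizes the trailing dimension, while the tensors here are channel-first `[B, C, T]`; a NumPy sketch of the same idea (an illustration, not the module itself):

```python
import numpy as np

def layer_norm_channel_first(x, eps=1e-5):
    """Normalize over the channel axis of a [B, C, T] array."""
    xt = x.transpose(0, 2, 1)            # [B, T, C]: channels last
    mu = xt.mean(-1, keepdims=True)
    var = xt.var(-1, keepdims=True)
    out = (xt - mu) / np.sqrt(var + eps)
    return out.transpose(0, 2, 1)        # back to [B, C, T]

x = np.random.default_rng(0).normal(size=(2, 8, 5))
y = layer_norm_channel_first(x)
# At each (batch, time) position the channel vector is ~zero-mean, unit-var.
assert np.allclose(y.mean(axis=1), 0, atol=1e-6)
assert np.allclose(y.var(axis=1), 1, atol=1e-3)
```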
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
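Both `DDSConv` and `WN` below compute padding as `(kernel_size * dilation - dilation) // 2`, i.e. `dilation * (kernel_size - 1) // 2`, which preserves sequence length for odd kernels at stride 1; a quick check against the standard conv output-length formula:

```python
def conv_out_len(L, kernel_size, dilation, padding, stride=1):
    """Standard 1-D convolution output length (PyTorch convention)."""
    return (L + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

# For every odd kernel and any dilation, the chosen padding is length-preserving.
for k in (3, 5, 7):
    for d in (1, 2, 4, 8):
        pad = (k * d - d) // 2
        assert conv_out_len(100, k, d, pad) == 100
```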
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
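The `logdet` returned by `Log` is just the change-of-variables term: for `y = log x`, `dy/dx = 1/x`, so `log|det J| = -log x = -y` summed over elements; a small numerical confirmation in NumPy:

```python
import numpy as np

x = np.array([0.5, 1.0, 2.0])
y = np.log(x)
logdet = np.sum(-y)                      # what the Log flow returns
# The Jacobian of elementwise log is diag(1/x):
assert np.isclose(logdet, np.sum(np.log(1.0 / x)))
# And the transform inverts exactly, as in the reverse branch:
assert np.allclose(np.exp(y), x)
```

(The module also clamps inputs to at least 1e-5 before the log so zeros in masked positions cannot produce `-inf`.)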
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
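`ElementwiseAffine` is the simplest invertible flow here: forward `y = m + exp(logs) * x`, inverse `x = (y - m) * exp(-logs)`, with log-determinant `sum(logs)` over unmasked positions; a NumPy round-trip under toy parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
m, logs = rng.normal(size=4), rng.normal(size=4)
x = rng.normal(size=4)

y = m + np.exp(logs) * x           # forward pass
x_rec = (y - m) * np.exp(-logs)    # reverse pass
assert np.allclose(x_rec, x)
# Jacobian is diag(exp(logs)), so log|det| is simply sum(logs):
assert np.isclose(np.sum(logs), np.log(np.abs(np.prod(np.exp(logs)))))
```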
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/logger/saver.py b/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/logger/saver.py
deleted file mode 100644
index ef78b52b6bcd32106f962b731d3784d72d5f0cce..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/logger/saver.py
+++ /dev/null
@@ -1,150 +0,0 @@
-'''
-author: wayn391@mastertones
-'''
-
-import os
-import json
-import time
-import yaml
-import datetime
-import torch
-import matplotlib.pyplot as plt
-from . import utils
-from torch.utils.tensorboard import SummaryWriter
-
-class Saver(object):
- def __init__(
- self,
- args,
- initial_global_step=-1):
-
- self.expdir = args.env.expdir
- self.sample_rate = args.data.sampling_rate
-
- # cold start
- self.global_step = initial_global_step
- self.init_time = time.time()
- self.last_time = time.time()
-
- # makedirs
- os.makedirs(self.expdir, exist_ok=True)
-
- # path
- self.path_log_info = os.path.join(self.expdir, 'log_info.txt')
-
- # ckpt
- os.makedirs(self.expdir, exist_ok=True)
-
- # writer
- self.writer = SummaryWriter(os.path.join(self.expdir, 'logs'))
-
- # save config
- path_config = os.path.join(self.expdir, 'config.yaml')
- with open(path_config, "w") as out_config:
- yaml.dump(dict(args), out_config)
-
-
- def log_info(self, msg):
- '''log method'''
- if isinstance(msg, dict):
- msg_list = []
- for k, v in msg.items():
- tmp_str = ''
- if isinstance(v, int):
- tmp_str = '{}: {:,}'.format(k, v)
- else:
- tmp_str = '{}: {}'.format(k, v)
-
- msg_list.append(tmp_str)
- msg_str = '\n'.join(msg_list)
- else:
- msg_str = msg
-
-        # display
- print(msg_str)
-
- # save
- with open(self.path_log_info, 'a') as fp:
- fp.write(msg_str+'\n')
-
-    def log_value(self, value_dict):
-        for k, v in value_dict.items():
-            self.writer.add_scalar(k, v, self.global_step)
-
- def log_spec(self, name, spec, spec_out, vmin=-14, vmax=3.5):
- spec_cat = torch.cat([(spec_out - spec).abs() + vmin, spec, spec_out], -1)
- spec = spec_cat[0]
- if isinstance(spec, torch.Tensor):
- spec = spec.cpu().numpy()
- fig = plt.figure(figsize=(12, 9))
- plt.pcolor(spec.T, vmin=vmin, vmax=vmax)
- plt.tight_layout()
- self.writer.add_figure(name, fig, self.global_step)
-
-    def log_audio(self, audio_dict):
-        for k, v in audio_dict.items():
-            self.writer.add_audio(k, v, global_step=self.global_step, sample_rate=self.sample_rate)
-
- def get_interval_time(self, update=True):
- cur_time = time.time()
- time_interval = cur_time - self.last_time
- if update:
- self.last_time = cur_time
- return time_interval
-
- def get_total_time(self, to_str=True):
- total_time = time.time() - self.init_time
- if to_str:
- total_time = str(datetime.timedelta(
- seconds=total_time))[:-5]
- return total_time
-
- def save_model(
- self,
- model,
- optimizer,
- name='model',
- postfix='',
- to_json=False):
- # path
- if postfix:
- postfix = '_' + postfix
- path_pt = os.path.join(
- self.expdir , name+postfix+'.pt')
-
- # check
- print(' [*] model checkpoint saved: {}'.format(path_pt))
-
- # save
- if optimizer is not None:
- torch.save({
- 'global_step': self.global_step,
- 'model': model.state_dict(),
- 'optimizer': optimizer.state_dict()}, path_pt)
- else:
- torch.save({
- 'global_step': self.global_step,
- 'model': model.state_dict()}, path_pt)
-
- # to json
- if to_json:
- path_json = os.path.join(
- self.expdir , name+'.json')
-            utils.to_json(path_pt, path_json)
-
- def delete_model(self, name='model', postfix=''):
- # path
- if postfix:
- postfix = '_' + postfix
- path_pt = os.path.join(
- self.expdir , name+postfix+'.pt')
-
- # delete
- if os.path.exists(path_pt):
- os.remove(path_pt)
- print(' [*] model checkpoint deleted: {}'.format(path_pt))
-
- def global_step_increment(self):
- self.global_step += 1
-
-
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fast_rcnn/fast_rcnn_r101_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/fast_rcnn/fast_rcnn_r101_fpn_1x_coco.py
deleted file mode 100644
index 9a76b3997fbbed5883adde2122dc17ee2262fa80..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fast_rcnn/fast_rcnn_r101_fpn_1x_coco.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './fast_rcnn_r50_fpn_1x_coco.py'
-model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101))
diff --git a/spaces/HaHaBill/LandShapes-Antarctica/interactive.py b/spaces/HaHaBill/LandShapes-Antarctica/interactive.py
deleted file mode 100644
index f2a95cf96173424b4939d0aa53b89e0460e2bc27..0000000000000000000000000000000000000000
--- a/spaces/HaHaBill/LandShapes-Antarctica/interactive.py
+++ /dev/null
@@ -1,655 +0,0 @@
-# Copyright 2020 Erik Härkönen. All rights reserved.
-# This file is licensed to you under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License. You may obtain a copy
-# of the License at http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software distributed under
-# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS
-# OF ANY KIND, either express or implied. See the License for the specific language
-# governing permissions and limitations under the License.
-
-# An interactive glumpy (OpenGL) + tkinter viewer for interacting with principal components.
-# Requires OpenGL and CUDA support for rendering.
-
-import torch
-import numpy as np
-import tkinter as tk
-from tkinter import ttk
-from types import SimpleNamespace
-import matplotlib.pyplot as plt
-from pathlib import Path
-from os import makedirs
-from models import get_instrumented_model
-from config import Config
-from decomposition import get_or_compute
-from torch.nn.functional import interpolate
-from TkTorchWindow import TorchImageView
-from functools import partial
-from platform import system
-from PIL import Image
-from utils import pad_frames, prettify_name
-import pickle
-
-# For platform specific UI tweaks
-is_windows = 'Windows' in system()
-is_linux = 'Linux' in system()
-is_mac = 'Darwin' in system()
-
-# Read input parameters
-args = Config().from_args()
-
-# Don't bother without GPU
-assert torch.cuda.is_available(), 'Interactive mode requires CUDA'
-
-# Use syntax from paper
-def get_edit_name(idx, s, e, name=None):
- return 'E({comp}, {edit_range}){edit_name}'.format(
- comp = idx,
- edit_range = f'{s}-{e}' if e > s else s,
- edit_name = f': {name}' if name else ''
- )
-
-# Load or compute PCA basis vectors
-def load_components(class_name, inst):
- global components, state, use_named_latents
-
- config = args.from_dict({ 'output_class': class_name })
- dump_name = get_or_compute(config, inst)
- data = np.load(dump_name, allow_pickle=False)
- X_comp = data['act_comp']
- X_mean = data['act_mean']
- X_stdev = data['act_stdev']
- Z_comp = data['lat_comp']
- Z_mean = data['lat_mean']
- Z_stdev = data['lat_stdev']
- random_stdev_act = np.mean(data['random_stdevs'])
- n_comp = X_comp.shape[0]
- data.close()
-
- # Transfer to GPU
- components = SimpleNamespace(
- X_comp = torch.from_numpy(X_comp).cuda().float(),
- X_mean = torch.from_numpy(X_mean).cuda().float(),
- X_stdev = torch.from_numpy(X_stdev).cuda().float(),
- Z_comp = torch.from_numpy(Z_comp).cuda().float(),
- Z_stdev = torch.from_numpy(Z_stdev).cuda().float(),
- Z_mean = torch.from_numpy(Z_mean).cuda().float(),
- names = [f'Component {i}' for i in range(n_comp)],
- latent_types = [model.latent_space_name()]*n_comp,
- ranges = [(0, model.get_max_latents())]*n_comp,
- )
-
- state.component_class = class_name # invalidates cache
- use_named_latents = False
- print('Loaded components for', class_name, 'from', dump_name)
-
-# Load previously exported named components from
-# directory specified with '--inputs=path/to/comp'
-def load_named_components(path, class_name):
- global components, state, use_named_latents
-
- import glob
- matches = glob.glob(f'{path}/*.pkl')
-
- selected = []
- for dump_path in matches:
- with open(dump_path, 'rb') as f:
- data = pickle.load(f)
- if data['model_name'] != model_name or data['output_class'] != class_name:
- continue
-
- if data['latent_space'] != model.latent_space_name():
- print('Skipping', dump_path, '(wrong latent space)')
- continue
-
- selected.append(data)
- print('Using', dump_path)
-
- if len(selected) == 0:
- raise RuntimeError('No valid components in given path.')
-
- comp_dict = { k : [] for k in ['X_comp', 'Z_comp', 'X_stdev', 'Z_stdev', 'names', 'types', 'layer_names', 'ranges', 'latent_types'] }
- components = SimpleNamespace(**comp_dict)
-
- for d in selected:
- s = d['edit_start']
- e = d['edit_end']
- title = get_edit_name(d['component_index'], s, e - 1, d['name']) # show inclusive
- components.X_comp.append(torch.from_numpy(d['act_comp']).cuda())
- components.Z_comp.append(torch.from_numpy(d['lat_comp']).cuda())
- components.X_stdev.append(d['act_stdev'])
- components.Z_stdev.append(d['lat_stdev'])
- components.names.append(title)
- components.types.append(d['edit_type'])
- components.layer_names.append(d['decomposition']['layer']) # only for act
- components.ranges.append((s, e))
- components.latent_types.append(d['latent_space']) # W or Z
-
- use_named_latents = True
- print('Loaded named components')
-
-def setup_model():
- global model, inst, layer_name, model_name, feat_shape, args, class_name
-
- model_name = args.model
- layer_name = args.layer
- class_name = args.output_class
-
- # Speed up pytorch
- torch.autograd.set_grad_enabled(False)
- torch.backends.cudnn.benchmark = True
-
- # Load model
- inst = get_instrumented_model(model_name, class_name, layer_name, torch.device('cuda'), use_w=args.use_w)
- model = inst.model
-
- feat_shape = inst.feature_shape[layer_name]
- sample_dims = np.prod(feat_shape)
-
- # Initialize
- if args.inputs:
- load_named_components(args.inputs, class_name)
- else:
- load_components(class_name, inst)
-
-# Project tensor 'X' onto orthonormal basis 'comp', return coordinates
-def project_ortho(X, comp):
- N = comp.shape[0]
- coords = (comp.reshape(N, -1) * X.reshape(-1)).sum(dim=1)
- return coords.reshape([N]+[1]*X.ndim)
-
-def zero_sliders():
- for v in ui_state.sliders:
- v.set(0.0)
-
-def reset_sliders(zero_on_failure=True):
- global ui_state
-
- mode = ui_state.mode.get()
-
- # Not orthogonal: need to solve least-norm problem
- # Not batch size 1: one set of sliders not enough
- # Not principal components: unsupported format
- is_ortho = not (mode == 'latent' and model.latent_space_name() == 'Z')
- is_single = state.z.shape[0] == 1
- is_pcs = not use_named_latents
-
- state.lat_slider_offset = 0
- state.act_slider_offset = 0
-
- enabled = False
- if not (enabled and is_ortho and is_single and is_pcs):
- if zero_on_failure:
- zero_sliders()
- return
-
- if mode == 'activation':
- val = state.base_act
- mean = components.X_mean
- comp = components.X_comp
- stdev = components.X_stdev
- else:
- val = state.z
- mean = components.Z_mean
- comp = components.Z_comp
- stdev = components.Z_stdev
-
- n_sliders = len(ui_state.sliders)
- coords = project_ortho(val - mean, comp)
- offset = torch.sum(coords[:n_sliders] * comp[:n_sliders], dim=0)
- scaled_coords = (coords.view(-1) / stdev).detach().cpu().numpy()
-
- # Part representable by sliders
- if mode == 'activation':
- state.act_slider_offset = offset
- else:
- state.lat_slider_offset = offset
-
- for i in range(n_sliders):
- ui_state.sliders[i].set(round(scaled_coords[i], ndigits=1))
-
-def setup_ui():
- global root, toolbar, ui_state, app, canvas
-
- root = tk.Tk()
- scale = 1.0
- app = TorchImageView(root, width=int(scale*1024), height=int(scale*1024), show_fps=False)
- app.pack(fill=tk.BOTH, expand=tk.YES)
- root.protocol("WM_DELETE_WINDOW", shutdown)
- root.title('GANspace')
-
- toolbar = tk.Toplevel(root)
- toolbar.protocol("WM_DELETE_WINDOW", shutdown)
- toolbar.geometry("215x800+0+0")
- toolbar.title('')
-
- N_COMPONENTS = min(70, len(components.names))
- ui_state = SimpleNamespace(
- sliders = [tk.DoubleVar(value=0.0) for _ in range(N_COMPONENTS)],
- scales = [],
- truncation = tk.DoubleVar(value=0.9),
- outclass = tk.StringVar(value=class_name),
- random_seed = tk.StringVar(value='0'),
- mode = tk.StringVar(value='latent'),
- batch_size = tk.IntVar(value=1), # how many images to show in window
- edit_layer_start = tk.IntVar(value=0),
- edit_layer_end = tk.IntVar(value=model.get_max_latents() - 1),
- slider_max_val = 10.0
- )
-
- # Z vs activation mode button
- #tk.Radiobutton(toolbar, text=f"Latent ({model.latent_space_name()})", variable=ui_state.mode, command=reset_sliders, value='latent').pack(fill="x")
- #tk.Radiobutton(toolbar, text="Activation", variable=ui_state.mode, command=reset_sliders, value='activation').pack(fill="x")
-
- # Choose range where latents are modified
- def set_min(val):
- ui_state.edit_layer_start.set(min(int(val), ui_state.edit_layer_end.get()))
- def set_max(val):
- ui_state.edit_layer_end.set(max(int(val), ui_state.edit_layer_start.get()))
- max_latent_idx = model.get_max_latents() - 1
-
- if not use_named_latents:
- slider_min = tk.Scale(toolbar, command=set_min, variable=ui_state.edit_layer_start,
- label='Layer start', from_=0, to=max_latent_idx, orient=tk.HORIZONTAL).pack(fill="x")
- slider_max = tk.Scale(toolbar, command=set_max, variable=ui_state.edit_layer_end,
- label='Layer end', from_=0, to=max_latent_idx, orient=tk.HORIZONTAL).pack(fill="x")
-
- # Scrollable list of components
- outer_frame = tk.Frame(toolbar, borderwidth=2, relief=tk.SUNKEN)
- canvas = tk.Canvas(outer_frame, highlightthickness=0, borderwidth=0)
- frame = tk.Frame(canvas)
- vsb = tk.Scrollbar(outer_frame, orient="vertical", command=canvas.yview)
- canvas.configure(yscrollcommand=vsb.set)
-
- vsb.pack(side="right", fill="y")
- canvas.pack(side="left", fill="both", expand=True)
- canvas.create_window((4,4), window=frame, anchor="nw")
-
- def onCanvasConfigure(event):
- canvas.itemconfigure("all", width=event.width)
- canvas.configure(scrollregion=canvas.bbox("all"))
-    canvas.bind("<Configure>", onCanvasConfigure)
-
- def on_scroll(event):
- delta = 1 if (event.num == 5 or event.delta < 0) else -1
- canvas.yview_scroll(delta, "units")
-
-    canvas.bind_all("<Button-4>", on_scroll)
-    canvas.bind_all("<Button-5>", on_scroll)
-    canvas.bind_all("<MouseWheel>", on_scroll)
-    canvas.bind_all("<Key>", lambda event: handle_keypress(event.keysym_num))
-
- # Sliders and buttons
- for i in range(N_COMPONENTS):
- inner = tk.Frame(frame, borderwidth=1, background="#aaaaaa")
- scale = tk.Scale(inner, variable=ui_state.sliders[i], from_=-ui_state.slider_max_val,
- to=ui_state.slider_max_val, resolution=0.1, orient=tk.HORIZONTAL, label=components.names[i])
- scale.pack(fill=tk.X, side=tk.LEFT, expand=True)
- ui_state.scales.append(scale) # for changing label later
- if not use_named_latents:
- tk.Button(inner, text=f"Save", command=partial(export_direction, i, inner)).pack(fill=tk.Y, side=tk.RIGHT)
- inner.pack(fill=tk.X)
-
- outer_frame.pack(fill="both", expand=True, pady=0)
-
- tk.Button(toolbar, text="Reset", command=reset_sliders).pack(anchor=tk.CENTER, fill=tk.X, padx=4, pady=4)
-
- tk.Scale(toolbar, variable=ui_state.truncation, from_=0.01, to=1.0,
- resolution=0.01, orient=tk.HORIZONTAL, label='Truncation').pack(fill="x")
-
- tk.Scale(toolbar, variable=ui_state.batch_size, from_=1, to=9,
- resolution=1, orient=tk.HORIZONTAL, label='Batch size').pack(fill="x")
-
- # Output class
- frame = tk.Frame(toolbar)
- tk.Label(frame, text="Class name").pack(fill="x", side="left")
- tk.Entry(frame, textvariable=ui_state.outclass).pack(fill="x", side="right", expand=True, padx=5)
- frame.pack(fill=tk.X, pady=3)
-
- # Random seed
- def update_seed():
- seed_str = ui_state.random_seed.get()
- if seed_str.isdigit():
- resample_latent(int(seed_str))
- frame = tk.Frame(toolbar)
- tk.Label(frame, text="Seed").pack(fill="x", side="left")
- tk.Entry(frame, textvariable=ui_state.random_seed, width=12).pack(fill="x", side="left", expand=True, padx=2)
- tk.Button(frame, text="Update", command=update_seed).pack(fill="y", side="right", padx=3)
- frame.pack(fill=tk.X, pady=3)
-
- # Get new latent or new components
- tk.Button(toolbar, text="Resample latent", command=partial(resample_latent, None, False)).pack(anchor=tk.CENTER, fill=tk.X, padx=4, pady=4)
- #tk.Button(toolbar, text="Recompute", command=recompute_components).pack(anchor=tk.CENTER, fill=tk.X)
-
-# App state
-state = SimpleNamespace(
- z=None, # current latent(s)
- lat_slider_offset = 0, # part of lat that is explained by sliders
- act_slider_offset = 0, # part of act that is explained by sliders
- component_class=None, # name of current PCs' image class
- seed=0, # Latent z_i generated by seed+i
- base_act = None, # activation of considered layer given z
-)
-
-def resample_latent(seed=None, only_style=False):
- class_name = ui_state.outclass.get()
- if class_name.isnumeric():
- class_name = int(class_name)
-
- if hasattr(model, 'is_valid_class'):
- if not model.is_valid_class(class_name):
- return
-
- model.set_output_class(class_name)
-
- B = ui_state.batch_size.get()
- state.seed = np.random.randint(np.iinfo(np.int32).max - B) if seed is None else seed
- ui_state.random_seed.set(str(state.seed))
-
- # Use consecutive seeds along batch dimension (for easier reproducibility)
- trunc = ui_state.truncation.get()
- latents = [model.sample_latent(1, seed=state.seed + i, truncation=trunc) for i in range(B)]
-
- state.z = torch.cat(latents).clone().detach() # make leaf node
- assert state.z.is_leaf, 'Latent is not leaf node!'
-
- if hasattr(model, 'truncation'):
- model.truncation = ui_state.truncation.get()
- print(f'Seeds: {state.seed} -> {state.seed + B - 1}' if B > 1 else f'Seed: {state.seed}')
-
- torch.manual_seed(state.seed)
- model.partial_forward(state.z, layer_name)
- state.base_act = inst.retained_features()[layer_name]
-
- reset_sliders(zero_on_failure=False)
-
- # Remove focus from text entry
- canvas.focus_set()
-
-# Used to recompute after changing class of conditional model
-def recompute_components():
- class_name = ui_state.outclass.get()
- if class_name.isnumeric():
- class_name = int(class_name)
-
- if hasattr(model, 'is_valid_class'):
- if not model.is_valid_class(class_name):
- return
-
- if hasattr(model, 'set_output_class'):
- model.set_output_class(class_name)
-
- load_components(class_name, inst)
-
-# Used to detect parameter changes for lazy recomputation
-class ParamCache():
- def update(self, **kwargs):
- dirty = False
- for argname, val in kwargs.items():
- # Check pointer, then value
- current = getattr(self, argname, 0)
- if current is not val and pickle.dumps(current) != pickle.dumps(val):
- setattr(self, argname, val)
- dirty = True
- return dirty
-
-cache = ParamCache()
-
-def l2norm(t):
- return torch.norm(t.view(t.shape[0], -1), p=2, dim=1, keepdim=True)
-
-def apply_edit(z0, delta):
- return z0 + delta
-
-def reposition_toolbar():
- size, X, Y = root.winfo_geometry().split('+')
- W, H = size.split('x')
- toolbar_W = toolbar.winfo_geometry().split('x')[0]
- offset_y = -30 if is_linux else 0 # window title bar
- toolbar.geometry(f'{toolbar_W}x{H}+{int(X)-int(toolbar_W)}+{int(Y)+offset_y}')
- toolbar.update()
-
-def on_draw():
- global img
-
- n_comp = len(ui_state.sliders)
- slider_vals = np.array([s.get() for s in ui_state.sliders], dtype=np.float32)
-
- # Run model sparingly
- mode = ui_state.mode.get()
- latent_start = ui_state.edit_layer_start.get()
- latent_end = ui_state.edit_layer_end.get() + 1 # save as exclusive, show as inclusive
-
- if cache.update(coords=slider_vals, comp=state.component_class, mode=mode, z=state.z, s=latent_start, e=latent_end):
- with torch.no_grad():
- z_base = state.z - state.lat_slider_offset
- z_deltas = [0.0]*model.get_max_latents()
- z_delta_global = 0.0
-
- n_comp = slider_vals.size
- act_deltas = {}
-
- if torch.is_tensor(state.act_slider_offset):
- act_deltas[layer_name] = -state.act_slider_offset
-
- for space in components.latent_types:
- assert space == model.latent_space_name(), \
- 'Cannot mix latent spaces (for now)'
-
- for c in range(n_comp):
- coord = slider_vals[c]
- if coord == 0:
- continue
-
- edit_mode = components.types[c] if use_named_latents else mode
-
- # Activation offset
- if edit_mode in ['activation', 'both']:
- delta = components.X_comp[c] * components.X_stdev[c] * coord
- name = components.layer_names[c] if use_named_latents else layer_name
- act_deltas[name] = act_deltas.get(name, 0.0) + delta
-
- # Latent offset
- if edit_mode in ['latent', 'both']:
- delta = components.Z_comp[c] * components.Z_stdev[c] * coord
- edit_range = components.ranges[c] if use_named_latents else (latent_start, latent_end)
- full_range = (edit_range == (0, model.get_max_latents()))
-
- # Single or multiple offsets?
- if full_range:
- z_delta_global = z_delta_global + delta
- else:
- for l in range(*edit_range):
- z_deltas[l] = z_deltas[l] + delta
-
- # Apply activation deltas
- inst.remove_edits()
- for layer, delta in act_deltas.items():
- inst.edit_layer(layer, offset=delta)
-
- # Evaluate
- has_offsets = any(torch.is_tensor(t) for t in z_deltas)
- z_final = apply_edit(z_base, z_delta_global)
- if has_offsets:
- z_final = [apply_edit(z_final, d) for d in z_deltas]
- img = model.forward(z_final).clamp(0.0, 1.0)
-
- app.draw(img)
-
-# Save necessary data to disk for later loading
-def export_direction(idx, button_frame):
- name = tk.StringVar(value='')
- num_strips = tk.IntVar(value=0)
- strip_width = tk.IntVar(value=5)
-
- slider_values = np.array([s.get() for s in ui_state.sliders])
- slider_value = slider_values[idx]
- if (slider_values != 0).sum() > 1:
- print('Please modify only one slider')
- return
- elif slider_value == 0:
- print('Modify selected slider to set usable range (currently 0)')
- return
-
- popup = tk.Toplevel(root)
- popup.geometry("200x200+0+0")
- tk.Label(popup, text="Edit name").pack()
- tk.Entry(popup, textvariable=name).pack(pady=5)
- # tk.Scale(popup, from_=0, to=30, variable=num_strips,
- # resolution=1, orient=tk.HORIZONTAL, length=200, label='Image strips to export').pack()
- # tk.Scale(popup, from_=3, to=15, variable=strip_width,
- # resolution=1, orient=tk.HORIZONTAL, length=200, label='Image strip width').pack()
- tk.Button(popup, text='OK', command=popup.quit).pack()
-
- canceled = False
- def on_close():
- nonlocal canceled
- canceled = True
- popup.quit()
-
- popup.protocol("WM_DELETE_WINDOW", on_close)
- x = button_frame.winfo_rootx()
- y = button_frame.winfo_rooty()
- w = int(button_frame.winfo_geometry().split('x')[0])
- popup.geometry('%dx%d+%d+%d' % (180, 90, x + w, y))
- popup.mainloop()
- popup.destroy()
-
- # Update slider name
- label = get_edit_name(idx, ui_state.edit_layer_start.get(),
- ui_state.edit_layer_end.get(), name.get())
- ui_state.scales[idx].config(label=label)
-
- if canceled:
- return
-
- params = {
- 'name': name.get(),
- 'sigma_range': slider_value,
- 'component_index': idx,
- 'act_comp': components.X_comp[idx].detach().cpu().numpy(),
- 'lat_comp': components.Z_comp[idx].detach().cpu().numpy(), # either Z or W
- 'latent_space': model.latent_space_name(),
- 'act_stdev': components.X_stdev[idx].item(),
- 'lat_stdev': components.Z_stdev[idx].item(),
- 'model_name': model_name,
- 'output_class': ui_state.outclass.get(), # applied onto
- 'decomposition': {
- 'name': args.estimator,
- 'components': args.components,
- 'samples': args.n,
- 'layer': args.layer,
- 'class_name': state.component_class # computed from
- },
- 'edit_type': ui_state.mode.get(),
- 'truncation': ui_state.truncation.get(),
- 'edit_start': ui_state.edit_layer_start.get(),
- 'edit_end': ui_state.edit_layer_end.get() + 1, # show as inclusive, save as exclusive
- 'example_seed': state.seed,
- }
-
- edit_mode_str = params['edit_type']
- if edit_mode_str == 'latent':
- edit_mode_str = model.latent_space_name().lower()
-
- comp_class = state.component_class
- appl_class = params['output_class']
- if comp_class != appl_class:
- comp_class = f'{comp_class}_onto_{appl_class}'
-
- file_ident = "{model}-{name}-{cls}-{est}-{mode}-{layer}-comp{idx}-range{start}-{end}".format(
- model=model_name,
- name=prettify_name(params['name']),
- cls=comp_class,
- est=args.estimator,
- mode=edit_mode_str,
- layer=args.layer,
- idx=idx,
- start=params['edit_start'],
- end=params['edit_end'],
- )
-
- out_dir = Path(__file__).parent / 'out' / 'directions'
- makedirs(out_dir / file_ident, exist_ok=True)
-
- with open(out_dir / f"{file_ident}.pkl", 'wb') as outfile:
- pickle.dump(params, outfile)
-
- print(f'Direction "{name.get()}" saved as "{file_ident}.pkl"')
-
- batch_size = ui_state.batch_size.get()
- len_padded = ((num_strips.get() - 1) // batch_size + 1) * batch_size
- orig_seed = state.seed
-
- reset_sliders()
-
- # Limit max resolution
- max_H = 512
- ratio = min(1.0, max_H / inst.output_shape[2])
-
- strips = [[] for _ in range(len_padded)]
- for b in range(0, len_padded, batch_size):
- # Resample
- resample_latent((orig_seed + b) % np.iinfo(np.int32).max)
-
- sigmas = np.linspace(slider_value, -slider_value, strip_width.get(), dtype=np.float32)
- for sid, sigma in enumerate(sigmas):
- ui_state.sliders[idx].set(sigma)
-
- # Advance and show results on screen
- on_draw()
- root.update()
- app.update()
-
- batch_res = (255*img).byte().permute(0, 2, 3, 1).detach().cpu().numpy()
-
- for i, data in enumerate(batch_res):
- # Save individual
- name_nodots = file_ident.replace('.', '_')
- outname = out_dir / file_ident / f"{name_nodots}_ex{b+i}_{sid}.png"
- im = Image.fromarray(data)
-                im = im.resize((int(ratio*im.size[0]), int(ratio*im.size[1])), Image.LANCZOS)
- im.save(outname)
- strips[b+i].append(data)
-
- for i, strip in enumerate(strips[:num_strips.get()]):
- print(f'Saving strip {i + 1}/{num_strips.get()}', end='\r', flush=True)
- data = np.hstack(pad_frames(strip))
- im = Image.fromarray(data)
-        im = im.resize((int(ratio*im.size[0]), int(ratio*im.size[1])), Image.LANCZOS)
- im.save(out_dir / file_ident / f"{file_ident}_ex{i}.png")
-
- # Reset to original state
- resample_latent(orig_seed)
- ui_state.sliders[idx].set(slider_value)
-
-
-# Shared by glumpy and tkinter
-def handle_keypress(code):
- if code == 65307: # ESC
- shutdown()
- elif code == 65360: # HOME
- reset_sliders()
- elif code == 114: # R
- pass #reset_sliders()
-
-def shutdown():
- global pending_close
- pending_close = True
-
-def on_key_release(symbol, modifiers):
- handle_keypress(symbol)
-
-if __name__=='__main__':
- setup_model()
- setup_ui()
- resample_latent()
-
- pending_close = False
- while not pending_close:
- root.update()
- app.update()
- on_draw()
- reposition_toolbar()
-
- root.destroy()
\ No newline at end of file
diff --git a/spaces/HaMerL/ChaosinChat/assets/Kelpy-Codos.js b/spaces/HaMerL/ChaosinChat/assets/Kelpy-Codos.js
deleted file mode 100644
index d6873aec8e11d20d320ae41e1d743b047abc6fd0..0000000000000000000000000000000000000000
--- a/spaces/HaMerL/ChaosinChat/assets/Kelpy-Codos.js
+++ /dev/null
@@ -1,75 +0,0 @@
-// ==UserScript==
-// @name Kelpy Codos
-// @version 1.0.5
-// @author Keldos; https://keldos.me/
-// @description Add copy button to PRE tags before CODE tag, for ChaosinChatGPT especially.
-// Based on ChaosinChatGPT version: ac04408 (2023-3-22)
-// @license GPL-3.0
-// @grant none
-// ==/UserScript==
-
-(function () {
- 'use strict';
-
- function addCopyButton(pre) {
- var code = pre.querySelector('code');
- if (!code) {
-            return; // skip adding the button if no <code> element is found
- }
- var firstChild = code.firstChild;
- if (!firstChild) {
-            return; // skip adding the button if the <code> element has no child nodes
- }
- var button = document.createElement('button');
-        button.textContent = '\uD83D\uDCCE'; // use the 📎 paperclip symbol as the "copy" button label
- button.style.position = 'relative';
- button.style.float = 'right';
-        button.style.fontSize = '1em'; // optional: adjust the button size
-        button.style.background = 'none'; // optional: remove the background color
-        button.style.border = 'none'; // optional: remove the border
-        button.style.cursor = 'pointer'; // optional: show a pointer cursor
- button.addEventListener('click', function () {
- var range = document.createRange();
- range.selectNodeContents(code);
-            range.setStartBefore(firstChild); // start the range just before the first child node
- var selection = window.getSelection();
- selection.removeAllRanges();
- selection.addRange(range);
-
- try {
- var success = document.execCommand('copy');
- if (success) {
- button.textContent = '\u2714';
- setTimeout(function () {
-                        button.textContent = '\uD83D\uDCCE'; // restore the "copy" label
- }, 2000);
- } else {
- button.textContent = '\u2716';
- }
- } catch (e) {
- console.error(e);
- button.textContent = '\u2716';
- }
-
- selection.removeAllRanges();
- });
-        code.insertBefore(button, firstChild); // insert the button before the first child element
- }
-
- function handleNewElements(mutationsList, observer) {
- for (var mutation of mutationsList) {
- if (mutation.type === 'childList') {
- for (var node of mutation.addedNodes) {
- if (node.nodeName === 'PRE') {
- addCopyButton(node);
- }
- }
- }
- }
- }
-
- var observer = new MutationObserver(handleNewElements);
- observer.observe(document.documentElement, { childList: true, subtree: true });
-
- document.querySelectorAll('pre').forEach(addCopyButton);
-})();
diff --git a/spaces/Hahsgsgsy/teston/app.py b/spaces/Hahsgsgsy/teston/app.py
deleted file mode 100644
index b82d0e452ab7fab54c0ede2398bbcaa2f184cee2..0000000000000000000000000000000000000000
--- a/spaces/Hahsgsgsy/teston/app.py
+++ /dev/null
@@ -1,303 +0,0 @@
-from client import ClientRubika
-from rich import print as prints
-import asyncio
-
-auth = ['atjqbjkkroxsrtgssoovtmwoujiwbeev',
-'munbkdlfwwygqjfgyepmxesekanxzxhw',
-'qgvofrsyxjiijxjzagtdqknsgpqgcecd',
-'ymwjyfdjgxtztzltkqhgysrjtxszpcfp',
-'xphggmignuuzavartjgnhfjeybxtkape',
-'swlgxogibxepwfakgoxntdpvswfrelfu',
-'eclvsfpekmfrybfmcvsowdsywsczfgsg',
-'dcxfviqcbbdsgiaddvosaqeplwsfmvfi',
-'saiyinuwpgjvpztjdjrqayizpamiibgm',
-'huwglyreybcraxvswimpttwdcmkooqrl',
-'furrahbfdhnvymiodupfpqdqncdzzyoz',
-'obkyyueklrlkvvjqiamqxjlejbciznra',
-'espxetcfbtmabmiirvthahjunioculvl',
-'ubcjgcxkipwbbiuenvfyybkaulxhksrq',
-'helewpwywplttjrpelmiwznzyrxbzcaj',
-'qbitpkompdohybmllbtdiwbarqwoozsb',
-'seafttyxjppydogfxfuwevhjugurcwvc',
-'utczbcjsytrriksjmdyopochoubcxxnf',
-'cvfryrneoydhiveqggrthibrlmutfixf',
-'pviqxirsgwpvoveouwmthjqendscjfmx',
-'asifcwxcwwkmnkokvjjwabecsxpowpfm',
-'pqprqmxvhobjyoasdwshkaasildehgob',
-'xmepxwebnumzsbmlqtaogpjhnlpbrttz',
-'cyaixbkmrmdaaqqycfgubhwbpyehyqkd',
-'tuffefiwpuvwpgnrahcoathpfuyhtdil',
-'znzdbfledcjdnbohfsvrqrwwstuosqbk',
-'excecetesbknynleiqrwqqjxewmypmyo',
-'vhxxgclipwbxhffelpkpwtlrynjemgir',
-'jlivbmkbshkrluxcltujlnedytmbyxyx',
-'yjbeboliuhdallpibnvfpivgdnqcfoka',
-'svifiydupgzbnjteswfrrikafkimwdlx',
-'eyfkgqtoxutdlsktprwvhfkhlamibfsp',
-'tylddnxjuqyxcihdrialcifwuawkcsar',
-'kyhszzrmjnblobvmrjenujrithxumaeo',
-'qrderomlhpmynazcitfpijxbqouakpai',
-'iulddzteapsqfmwyfaskunzgugollqdx',
-'zxlouvuojbpxkkoadbkecrnltzumldim',
-'seyswyvrhbukdxaikxyqmuhvqmnaxhzm',
-'biblcviczavylxacwrwjofowsdemndyh',
-'zlzzwpjfbjoccjxzzpjkcxvzjdijezsm',
-'txrxpaieehgmijvqgoadivvbcotyniky',
-'isjsdndlixzilmojvwnrdvlyizquahbh',
-'cxklrrcxudcosgtecaghptydxvemlryh',
-'ukwuczeupbvhbhljfvwlshxsmdgqmjbw',
-'opbrlrabpsyrwslbolaaehlwkjmwjgnr',
-'ubhmqrkxmzhnunlinylcipnhhcyxxbqn',
-'amgmtbltspsuueigjufvrffjgopfzlef',
-'gccwrkuzwbnwgkvltgibnemmphpmjrou',
-'ndpyfgakusdwmsshezovmpdfvsjygmjt',
-'euhpemguyawhldzcwjgwujkwxnyvufxs','bkuafkwpaxkqerdheznoojkaxofzfset',
-'etoealirqvmvmtkuaftyfxqpiyepievm',
-'hlbxgylcnmoztbvplmxwpdgwxbdscswj',
-'nwsxvyaslgdjscpcygztnqsflqgpvrkh',
-'imltgtwjxdtxomughysevktnbcvzcnct',
-'esayqrdyeezdrpwwxtlrcuzbnjmcqgzk',
-'epnfrywianospdfqtncyckccbdhaypgr',
-'pbuzmrhsbqrscscejxefallaliailozj',
-'hpvbanqtoaczqdbpqliklmikeacsmuep',
-'zjommzqvlnamydjiuhnzjcanrbyfjkiw',
-'sgoqmdwaqcufwwbbrplvdtdtbwnqahao',
-'lwunthwrnpqvqodqaswrydkadmxfaohr',
-'ovubuwleutzueowjohubzogepxifwvxo',
-'pbtizwttympidcodcxjkmdmeugkqcfjz',
-'fwxizwqyeepdaylggbtkfzwxkyqonpqv',
-'tjeyvmnvxmyfbjxvvdzeauekswclqvtp',
-'pzqpweakoqyqhzktggmumokkrdkpzivy',
-'uwgapxdtudvpfaftbayjgkdkoxdvcqzu',
-'oqabvutkgsnnohagtuadbfmviwhtrtbe',
-'pdtpmdbmgbukwysdpfufmeyxsxywmfxu',
-'iiffyfknhfffyujduwzhpwckftkloiao',
-'ugczigwjbotfzfctgabkfgfrgrcswsbu',
-'lqtfzxoaxaoyfyzefgtlimweijxjluar',
-'judyzonyddpgmqwiowumeavwybzcgrcy',
-'mpfuyrcjtkgoigedxhsoagnwykbwkhcl',
-'kybfjgmiofzegdijatdlheedcrztwngr',
-'ddslcvmwkhqvdvfvtibeyueagdjbxbed',
-'qiwfdahzwpvabvqkbbfhprqxgjgwxips',
-'yswtrlctmxvwvigrbghqurrgbjediild',
-'uiajhxloaqtkmricvttgjmnngxebfsta',
-'cflknzrexhdnwgjhbstdsigbvxdkcdmt',
-'ufahumldfpqcrwcoeftrgrsctmhbuktg',
-'rpzzgexsacsahxghsrjxwtahxfqfdkfq',
-'pagcebydqoduolwywarildggspvulxxd',
-'utkqvrizyduofmrtosmwtcjunqoyepaf',
-'douizujslqpphutwwsodrgqkhcxbevto',
-'pcxokskafecwswsjnfnqjwajuyoxurmf',
-'fdvnvfdklndbvseqjiuatokcottayhtl',
-'qdnzwsypkmvjqcbnccklovjrjhwscjqq',
-'podlwexqlwotiszylsnoesqpazxhwlqz',
-'mdikclqlkclffybdyybyxrrgblpvilvr',
-'qbsjktbbkkgmdiyxvvnwqvbptlavmzpr',
-'yghyfgnvousfybeibakgvyvchtfjswxt',
-'avbktcykbwzvwfiazjmvizlkotgnzkso',
-'azzpnkqlhsuvzejqrxvguqedniryzazd',
-'pfkydacqfajpevwlgnxzaihxtgsuoueh',
-'rtwmvvzhpegcsqpsnlwdcyasdudmrblk',
-'pykehzypwxxjuzuhyyhayrdakfxgaxjh',
-'maozeldtpbenrkuebmanedgxczdhzowj',
-'eqzqylubbaksqxwfwlizqwkltdqlnvvc','syqlsligtpbyscwiybhucqtznapqdpkv',
-'cqyzuzhvbxafokjltayiawncwwjnupxq',
-'xwcrlgswifeplrqdbyjpoqekjahimmta',
-'yyjbrwmcefxlsybzdhrcqdijtjqxyhuh',
-'jvrjoyiuwmxsbnyipyoiixngehessukc',
-'hfxjgaindpelogagdjeafcoqnufzprrs',
-'dbbcnxagafcwftjwdztemyfdwudgqnbv',
-'qukyzqqmnuwepsbnxjrzthhjclmuusbl',
-'vsstbkgzapnrxjlekcqgfzlofiqvnect',
-'pdainianysarwfkjpnmxihfscrztkccm',
-'wcifumalbttkrcxqtbfauxpaxopxxdrv',
-'ejozzrwsmzbmlpyqugwjvtdhbqpiahhc',
-'imftdkywerrorckkmvleaiablkiyvqns',
-'mavzsixnoagwropoafgtycgjixitukwv',
-'nzgpjavyunppfufzzcgefwspqzfotbxl',
-'xfcxmgsqqwlpztedeqnzgtslfaagscau',
-'drwsqrpxupwfpzwnjydpzpfxzmiuhbsf',
-'bwvqpulqecwssayotmovyppqmkuhshya',
-'uhuejjalacoqvkicjjschfeeoyetdbsz',
-'iipppiofkgpmxemrtpjkkehbilpwtcek',
-'mkkwvmnlnhwjeekhertkxysbfsdwuejd',
-'jlyirgetdomvvifvxykmvehclxycnzmt',
-'txofoiqnvehnlvibpqaiorxzuqukmtub',
-'arqwpvtbnmdojnobivdyfkfxtrlbovrf',
-'pzszsvassgnkbffmsbtsqfyqkdshihxf',
-'qycuszizclhajpnjenrfavtwsdabpmvx',
-'emndiddrqnrgysigshegncpylagspqqa',
-'qkejmoscleklrretexmufeznpqvjqjjx',
-'lmrbxwllmaaysblpylvisugopxwooqyl',
-'ucrmxjcegbvaquhfiyjpfqdwamscpber',
-'xholbwbbgxyxlrvudteqblumchrmptod',
-'zerfnehemgwkqtzlrqecgmjiceqdvdwz',
-'nietaietdfojgkbvxfxvyhirtlcrcniw','rrjyvifegykeittrgfbaxuhjsyhdpqnb',
-'ltjnhkoqwhfokrxlcwctuzciegffeftl',
-'kceddjdfqdhfveaxqajlayqpazthzwvv',
-'zoomigswnxkknotosmewlypbjunybsex',
-'yfjnjypdszusbdmkpmdnfogxwqjvtwmp',
-'hsjhdzcwhoiamltcuceksvoswkeuxyyk',
-'ghfhyegoptbxajkbgjpiadvcxvxrfdkq',
-'oiyxkdjddaobicoetugkmerqjilmkbmk',
-'bgywiarfyerbytjrnfhtwgeyxucwtjkp',
-'vvhrrwxnpcqiwqkooorgcjkfezydidps',
-'zvniapuxoheejhqdumaeqcjbpowxkdvt',
-'prgfzjhmudrvgfkwmurszjiqnrtwyrkd',
-'eydsbgcdxlaqhierhohtpfrdyhewfwuk',
-'anfevydpnlbqzapohrfvqlyrvigvjrvu',
-'hoshqqeegfqnzxmyajmcurscwgfqwlhc',
-'unzgbbcxmmshtehoovttihotcwezuwqa',
-'jxsachfghjkcmrfbnoxgenvllpdxagxg',
-'eoxiolfivrexzisdxymflenbiaozazto',
-'ybaebiplohynkzehgtnqvbdqybwvdlfs',
-'fkalnkojsxsaxicdrdqlvmnerrmjbvcn',
-'nbudmhmppgwnacipfhlzebjsxatezowt',
-'vfezequaexscjgzdwxbfdbegrmwsnzjh',
-'zkbegazamxvmhyjrivmcelqjyjyagjku',
-'offqrwawbsnwempjdisinxkgcxcfrkwi',
-'pddfmjxvcualgbzkjsawpgzwsocpyhpn',
-'wuorznovyekfgqhckjsnxaixbusbjkil',
-'dposmavopcylpqybsoyjwnowtmezgjqm',
-'zluhouiitprzjhszdfxpnjucreqffwrp',
-'swffretdtzlkarjamklsuqpwoiupbonq',
-'ryfqzdjncjmcnrtoucskjcuhkxmlpinw',
-'cuqthfprxvleaasuhcbdzjastbbpezci',
-'wylrhxasmhglnbdofrjnqbhjbgqjyusp',
-'uvrgehomkzkxvecuzthvnaqjkkipdqft',
-'irnsoorxkjjxnvtmxkpukiyxeurmfiki',
-'hfqkerflyisgnoexwntnaptiwibixzpr',
-'zlhaqvndlfdxixjyekiabvaixdqkgdjl',
-'nngcqdvzgayxzsqbrneaxtlgrtklfqne',
-'euwchfomxdaydowcgicasxitefbkyymw',
-'wnpdrybrmheqdjlvniriiehjuqbpxryf',
-'nraxtwkofonorfjczkqnpyzvkidnqlzr',
-'ddbcaikpdeqjwltubfjudtocsvmrjhuo',
-'mwdtbydzwifwtxgmvieohliovkdcekmn',
-'gbypnodibkzewnazdxvuufsjlpprriak',
-'dagliziwqcpsmeiairbbduwxokkybohu',
-'ddnkiyrkkzdxdmqzdgrcpfeimioitzbn',
-'gkzpreyhrfgiifybiiugardzlyhqnotf',
-'qwfaifbrohntoddccbkkolzebefpprxr',
-'wrjksrsvzpygfbfjmbsxqfuhenpleanj',
-'ccjipaettmkqilerjtxrujymyaegikzv',
-'xrkuhvqnjxqiklmszidvondfttjkdhko','pwflxrtofkzjtketqaemovkzfmhzdkym',
-'jfhemwlmuifnvwciqebhzwjeulpxzbqe',
-'kodhqxlwcycnrtsyarihpeuzewcuojgz',
-'lyhlrhpwwguybtvqvgjmmfgffqzgmpdv',
-'nqtuuwxvvscxbshdunwxcumfvballsyr',
-'cnlaovjutuqoknzdrmazjhkltwguxiai',
-'hnuiemidfruxajexkenkotratwdwafmv',
-'dednhewvqrgcnuacknlhsatrfgchpgah',
-'nrjcgvpubetvnxibsufcluvsevqtvszv',
-'qgjpiiowthdpnmckuimpdrqvgbagwfob',
-'xsbpcfguxymlnidykgmakgkouopahuux',
-'gxdmmerfasglcnffezbrupbragihwnqu',
-'lbiruhaqmowrutcirvdwzxcbuaezjwev',
-'znebvraljlbakkvavrsuzenltsdkufio',
-'zcarbchdehvcginpkfjcrewdsrsdavrx',
-'iudasxsvtiommpbfqfffgbjwbranlwys',
-'azlhiwysbdljknmczwgnebcsnooifncs',
-'ctbfzxthsygxvygglvtogvbbvokkhcjm',
-'thupbmgaznagatlryaopfcoaganbuobx',
-'hhzzcongglxqwgxmnrxqameykfcbvrbg',
-'fypssnnwbnrkrsgpisulowgpdirselcj',
-'jsypsnsiaxhhljxtbnqubgpqrerdszzy',
-'obguhrfrdgxfsyrmkgenlksiarsfvnku',
-'vkjmkghtodxuutirexpudyzpfgfrsydn',
-'twnruwxvhruplwnzmmdfjcbluxcgkeih',
-'gljovwnttigksbgxfbdsjavveqpmkhsm',
-'fjkxztjyttcnvulegtfretanqttccxcb',
-'ngsuwcpqyzjxpxmorzdipqhlddwwjhgz',
-'bxtjvhzdhzeqmlqeqprzgveavigcoggu',
-'hzmlirhzaagwdvvpzrlsgwgrsingeiei',
-'bnvloctykvidqtqrkuolrgfbgmuoxstg',
-'aqgygvzdpfhjqifppxtsjgglkojvsgkt',
-'uqdfgbsfvtvebvkkygjgdebxqvgrathw',
-'vnlprrbnegqcetuopcacvniiqbyslexs',
-'oclsgxtcjhkvzwczgkjcpgrvrpygaqxn',
-'uvdostudztexiricjuibdwomnehwbica',
-'inslxhmkwdwweosayolvqatvqhouzhjd',
-'uakmhhtwlivvpttfsigruyswzwauuayk',
-'evprnlwazhdbxtwkjdpwdbdooeinnxbe',
-'yvswazrrrskvcjjklcasourlfmceovta',
-'kyaukaclujxslnlxcaulhyifztxpvbui',
-'wmxiodsrsympgsdntjmwwdmvnakawxyu',
-'uraihmwqqddztttffffzzoypeasatavv',
-'yholopbqxbxlilepkoxaboqmqkxvjldx',
-'nmcugznktzobujcckgasgawceemkzvkv',
-'cqsxgprzfkvpjkdymgqtsjvqcdhdkncv',
-'auioypitkdexssmdykxivqelacofjmlj',
-'lohfielmwqqnmtbtncqyzciozchxmqqy',
-'pmzlevlpaxlcltfvqsiwmncellqkisyy',
-'tfrqswqecnpnciltgrsaqtxmpcueysgi','trimqqayiizwoqvxyufsnbawpywlzaeg',
-'ynfxeufgfyrzfnttbjsqocgcgjxmddeg',
-'zbrbzfqikzuncbcqddzzeikxctjmsuvi',
-'fpudvoioujhwhltffeggbzjopspgusuq',
-'wrgkzjpeflpsrvugwockchtnmacetpqp',
-'mooxdxjzrzjarfuserdqnzewnvhrkhki',
-'pwbnyxntdogqxwqqngwgyneerengnmxg',
-'ngcyrifvkdjvvcuflgcbvbrsikgfzdtw',
-'bhbidymctoqelcvzeyfrcoplexjnkpdh',
-'znqqynizgwvduwgfucnuexcgrvkttrgd',
-'bisvumatpwfvoympeofsmsgauabdlsls',
-'zifashukopettkayevpkxvtiofjodcqb',
-'dqumtiszzrxblzhfjjqkyujkpynljfut',
-'qlazmojeuqjzoxijkituoqaehkdgmyor',
-'qgpdhfmdhpmmjszzgffrnwmghqsbntvb',
-'pafecsigugdvomncvhbvhnwgkesblnhl',
-'apobeydwhsfojlzwbmininsekkshzvzx',
-'yvzsucrxedsbwxqndefhwnmgtstobgrn',
-'avvngtsmuzwtfujrqmkpbunfqnvfkowd',
-'njgmexxhxyliwkchkqhnvfruoguewvls',
-'ykeucajipkxilgsjolymliaaxawbqoyx',
-'mxeoluvdsmlaeqefvwwpliqdxjephlgo',
-'xilvxqngcxothxpzbnajiwhzebfpblfh',
-'mpfhsmdqzpsgvrxktocxhppszqxdqdju',
-'zqhrabnwitfcjrzstlxfothhuggbvegj',
-'seemxufphxoefhznhewegzxowsmzsnga',
-'vbbsgoferlvhbtruklzlvmfmkusaflrw',
-'ocryetqtqixnndgenhdpscqorafvlidp',
-'gwnulgsbqqdedkzwchtrihpgwdkjdlrg',
-'gguiuopvvokqyklkxqogawegcocapdjw',
-'tehecaxgjtytgthjztueeympoxsveovm',
-'mpqppckjywertolycwpwblolinrvbpus',
-'jzbnwobxrumiavemtelyaninkkwcraqy',
-'zaxbsllwgaxnquvtmdjwnpxuioxgdzjw',
-'gsjrxgawizxbwhcxesvqzjnptjkpnpkm',
-'xfjdepmmmbpalaiiuryypskugmsofpqs',
-'xjowzsvnbnejliqenrqnhuzmsgqboxoi',
-'ywgeufujshygtimstzqkdsgmmtjskcpa',
-'fsoixdthatddsqpovblzhdvnckwlhyxh',
-'awikcdblxdosnqcrmbkvuehqlsjloyau',
-'vkdqqanatqmwgunviptxodqwbysovsal',
-'fyesccgnckmcviesdtnmronlzqxsryir',
-'qsescpvrdnsvoqylqfqadfnpsjivqfua',
-'maymyxleoqjquauxacpgeykjeoqflgns',
-'rsqfcbydvbvrumbhwhyhwwhtzscoivqq',
-'bttvswrcjidqqrguwjpiodkxiwitylan',
-'oyosvjstwobpdjajxmphjhnmvephhejv',
-'aymejiaepqrastcptjxesfgaedkchlac',
-'ldqjbyvjipynekiwnsoofnswasicwwiy',
-'lhdwkgizdrxmourluakkcqnarzaqwtqc']
-
-bot = ClientRubika('CipherX') # don't put anything in here
-
-async def main():
- for authX in auth:
- try:
- status = bot.online(authX)
- prints(f"""
-=========================\n
-{status}\n
-AUTH => "{authX}"\n
-
-""")
-        except Exception:
- pass
-
-asyncio.run(main())
-
-
-
-
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/gottbert/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/gottbert/README.md
deleted file mode 100644
index 1d58feb279a4a50222290546c3bb285d3cea98e6..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/gottbert/README.md
+++ /dev/null
@@ -1,64 +0,0 @@
-# GottBERT: a pure German language model
-
-## Introduction
-
-[GottBERT](http://arxiv.org/abs/2012.02110) is a RoBERTa-based language model pretrained on 145GB of German text.
-
-## Example usage
-
-### fairseq
-##### Load GottBERT from torch.hub (PyTorch >= 1.1):
-```python
-import torch
-gottbert = torch.hub.load('pytorch/fairseq', 'gottbert-base')
-gottbert.eval() # disable dropout (or leave in train mode to finetune)
-```
-
-##### Load GottBERT (for PyTorch 1.0 or custom models):
-```bash
-# Download the GottBERT model
-wget https://dl.gottbert.de/fairseq/models/gottbert-base.tar.gz
-tar -xzvf gottbert-base.tar.gz
-```
-
-```python
-# Load the model in fairseq
-from fairseq.models.roberta import GottbertModel
-gottbert = GottbertModel.from_pretrained('/path/to/gottbert')
-gottbert.eval() # disable dropout (or leave in train mode to finetune)
-```
-
-##### Filling masks:
-```python
-masked_line = 'Gott ist <mask> ! :)'
-gottbert.fill_mask(masked_line, topk=3)
-# [('Gott ist gut ! :)', 0.3642110526561737, ' gut'),
-# ('Gott ist überall ! :)', 0.06009674072265625, ' überall'),
-# ('Gott ist großartig ! :)', 0.0370681993663311, ' großartig')]
-```
-
-##### Extract features from GottBERT
-
-```python
-# Extract the last layer's features
-line = "Der erste Schluck aus dem Becher der Naturwissenschaft macht atheistisch , aber auf dem Grunde des Bechers wartet Gott !"
-tokens = gottbert.encode(line)
-last_layer_features = gottbert.extract_features(tokens)
-assert last_layer_features.size() == torch.Size([1, 27, 768])
-
-# Extract features from all layers (layer 0 is the embedding layer)
-all_layers = gottbert.extract_features(tokens, return_all_hiddens=True)
-assert len(all_layers) == 13
-assert torch.all(all_layers[-1] == last_layer_features)
-```
-## Citation
-If you use our work, please cite:
-
-```bibtex
-@misc{scheible2020gottbert,
- title={GottBERT: a pure German Language Model},
- author={Raphael Scheible and Fabian Thomczyk and Patric Tippmann and Victor Jaravine and Martin Boeker},
- year={2020},
- eprint={2012.02110},
- archivePrefix={arXiv},
- primaryClass={cs.CL}
-}
-```
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/scalar/utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/scalar/utils.py
deleted file mode 100644
index 2ec6af3fcb09ccaf853be15a84ed8181f9e2f546..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/scalar/utils.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from operator import attrgetter
-
-import torch.distributed as dist
-import torch.nn as nn
-
-from ..pq.utils import attrsetter, get_layers
-from .modules import ActivationQuantizer, IntConv2d, IntEmbedding, IntLinear
-
-
-MAPPING = {nn.Linear: IntLinear, nn.Embedding: IntEmbedding, nn.Conv2d: IntConv2d}
-
-
-def quantize_model_(model, p=0.2, bits=8, update_step=3000, method="histogram", remove_weights=False):
- """
- Replaces all modules with their scalar quantized counterpart and
-    registers hooks to quantize the post-activations of those modules.
-
-    Args:
-        - model: a nn.Module
-        - p: amount of noise (0 for no noise, 1 to quantize all the weights/activations)
-        - bits: number of bits
-        - update_step: update quantization parameters every update_step steps
-        - method: strategy used to compute the quantization parameters (default "histogram")
-        - remove_weights: whether to also strip the "weight_orig"/"weight" extensions
-          from layer names when collecting the layers to quantize
-    """
-    # quantize all layers
-    quantized_layers = get_layers(model, "(.*?)", remove_weights=remove_weights)
-
- for layer in quantized_layers:
-
- # book-keeping
- is_master_process = (not dist.is_initialized()) or (
- dist.is_initialized() and dist.get_rank() == 0
- )
-
- # recover module
- module = attrgetter(layer)(model)
- if is_master_process:
- logging.info(
- f"Quantizing layer {layer} with bits={bits} and QuantNoise={p}"
- )
-
- # quantization params
- q_params = {
- "p": p,
- "update_step": update_step,
- "bits": bits,
- "method": method,
- "counter": 0,
- }
-
- # instantiate the quantized counterpart
- if isinstance(module, tuple(MAPPING.keys())):
- QuantizedModule = MAPPING[module.__class__]
- quantized_module = QuantizedModule.__new__(QuantizedModule)
- params = module.__dict__
- params.update(q_params)
- quantized_module.__dict__.update(params)
-
- else:
- if is_master_process:
- logging.info(f"Module {module} not yet supported for quantization")
- continue
-
- # activation quantization
- a_q = ActivationQuantizer(quantized_module, p=0, bits=bits, method=method)
-
- # replace layer by its quantized counterpart
- attrsetter(layer)(model, quantized_module)
-
- # return name of quantized layers
- return quantized_layers
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_ema.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_ema.py
deleted file mode 100644
index 88ea65a434e49775d40f2b08ce6df0f8d9929c18..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_ema.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import unittest
-from copy import deepcopy
-from dataclasses import dataclass
-from typing import Optional
-
-import torch
-from fairseq.models.ema import EMA
-
-
-class DummyModule(torch.nn.Module):
-    def __init__(self) -> None:
-        """Minimal two-layer module used to exercise EMA updates in these tests."""
-        super().__init__()
- self.layer = torch.nn.Linear(in_features=32, out_features=2)
- self.another_layer = torch.nn.Linear(in_features=2, out_features=2)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.layer(x)
- return self.another_layer(x)
-
-
-@dataclass
-class EMAConfig(object):
- ema_decay: float = 0.99
- ema_start_update: int = 0
- ema_fp32: bool = False
- ema_seed_model: Optional[str] = None
-
-
-class TestEMAGPU(unittest.TestCase):
- def assertTorchAllClose(self, x, y, atol=1e-8, rtol=1e-5, msg=None):
- diff = x.float() - y.float()
- diff_norm = torch.norm(diff)
- other_norm = torch.norm(y.float())
-
- if msg is None:
- msg = "|input - other| > {} + {} * |other|".format(
- atol, rtol
- )
-
- self.assertLessEqual(
- diff_norm,
- atol + rtol * other_norm,
- msg=msg,
- )
-
- def test_ema(self):
- model = DummyModule()
- optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
- state = deepcopy(model.state_dict())
- config = EMAConfig()
- ema = EMA(model, config)
-
- # set decay
- ema._set_decay(config.ema_decay)
- self.assertEqual(ema.get_decay(), config.ema_decay)
-
- # get model
- self.assertEqual(ema.get_model(), ema.model)
-
- # Since fp32 params is not used, it should be of size 0
- self.assertEqual(len(ema.fp32_params), 0)
-
- # EMA step
- x = torch.randn(32)
- y = model(x)
- loss = y.sum()
- loss.backward()
- optimizer.step()
-
- ema.step(model)
-
- ema_state_dict = ema.get_model().state_dict()
-
- for key, param in model.state_dict().items():
- prev_param = state[key]
- ema_param = ema_state_dict[key]
-
- if "version" in key:
- # Do not decay a model.version pytorch param
- continue
- self.assertTorchAllClose(
- ema_param,
- config.ema_decay * prev_param + (1 - config.ema_decay) * param,
- )
-
- # Since fp32 params is not used, it should be of size 0
- self.assertEqual(len(ema.fp32_params), 0)
-
- # Load EMA into model
- model2 = DummyModule()
- ema.reverse(model2)
-
- for key, param in model2.state_dict().items():
- ema_param = ema_state_dict[key]
- self.assertTrue(
- torch.allclose(ema_param, param)
- )
-
- def test_ema_fp32(self):
- model = DummyModule().half()
- optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
- state = deepcopy(model.state_dict())
- config = EMAConfig(ema_fp32=True)
- ema = EMA(model, config)
-
- x = torch.randn(32)
- y = model(x.half())
- loss = y.sum()
- loss.backward()
- optimizer.step()
-
- ema.step(model)
-
- for key, param in model.state_dict().items():
- prev_param = state[key]
- ema_param = ema.get_model().state_dict()[key]
-
- if "version" in key:
- # Do not decay a model.version pytorch param
- continue
- self.assertIn(key, ema.fp32_params)
-
- # EMA update is done in fp32, and hence the EMA param must be
- # closer to the EMA update done in fp32 than in fp16.
- self.assertLessEqual(
- torch.norm(
- ema_param.float() -
- (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half().float()
- ),
- torch.norm(
- ema_param.float() -
- (config.ema_decay * prev_param + (1 - config.ema_decay) * param).float()
- ),
- )
- self.assertTorchAllClose(
- ema_param,
- (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half(),
- )
-
- def test_ema_fp16(self):
- model = DummyModule().half()
- optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
- state = deepcopy(model.state_dict())
- config = EMAConfig(ema_fp32=False)
- ema = EMA(model, config)
-
- # Since fp32 params is not used, it should be of size 0
- self.assertEqual(len(ema.fp32_params), 0)
-
- x = torch.randn(32)
- y = model(x.half())
- loss = y.sum()
- loss.backward()
- optimizer.step()
-
- ema.step(model)
-
- for key, param in model.state_dict().items():
- prev_param = state[key]
- ema_param = ema.get_model().state_dict()[key]
-
- if "version" in key:
- # Do not decay a model.version pytorch param
- continue
-
- # EMA update is done in fp16, and hence the EMA param must be
- # closer to the EMA update done in fp16 than in fp32.
- self.assertLessEqual(
- torch.norm(
- ema_param.float() -
- (config.ema_decay * prev_param + (1 - config.ema_decay) * param).float()
- ),
- torch.norm(
- ema_param.float() -
- (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half().float()
- ),
- )
- self.assertTorchAllClose(
- ema_param,
- config.ema_decay * prev_param + (1 - config.ema_decay) * param,
- )
-
- # Since fp32 params is not used, it should be of size 0
- self.assertEqual(len(ema.fp32_params), 0)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/HawkEye098432/Vocals_seperator/README.md b/spaces/HawkEye098432/Vocals_seperator/README.md
deleted file mode 100644
index 6eaae12be7a215a065884e90bc0a9bd9d2f9e962..0000000000000000000000000000000000000000
--- a/spaces/HawkEye098432/Vocals_seperator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Vocals Separator
-emoji: 🏢
-colorFrom: yellow
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/HgMenon/Transcribe_V0.2/tests/vad_test.py b/spaces/HgMenon/Transcribe_V0.2/tests/vad_test.py
deleted file mode 100644
index b465d8a380f9316a6830d9aac320c85f22aba0a0..0000000000000000000000000000000000000000
--- a/spaces/HgMenon/Transcribe_V0.2/tests/vad_test.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import pprint
-import unittest
-import numpy as np
-import sys
-
-sys.path.append('../whisper-webui')
-
-from src.vad import AbstractTranscription, TranscriptionConfig, VadSileroTranscription
-
-class TestVad(unittest.TestCase):
- def __init__(self, *args, **kwargs):
- super(TestVad, self).__init__(*args, **kwargs)
- self.transcribe_calls = []
-
- def test_transcript(self):
- mock = MockVadTranscription()
-
- self.transcribe_calls.clear()
- result = mock.transcribe("mock", lambda segment : self.transcribe_segments(segment))
-
- self.assertListEqual(self.transcribe_calls, [
- [30, 30],
- [100, 100]
- ])
-
- self.assertListEqual(result['segments'],
- [{'end': 50.0, 'start': 40.0, 'text': 'Hello world '},
- {'end': 120.0, 'start': 110.0, 'text': 'Hello world '}]
- )
-
- def transcribe_segments(self, segment):
- self.transcribe_calls.append(segment.tolist())
-
- # Dummy text
- return {
- 'text': "Hello world ",
- 'segments': [
- {
- "start": 10.0,
- "end": 20.0,
- "text": "Hello world "
- }
- ],
- 'language': ""
- }
-
-class MockVadTranscription(AbstractTranscription):
- def __init__(self):
- super().__init__()
-
-    def get_audio_segment(self, audio: str, start_time: str = None, duration: str = None):
- start_time_seconds = float(start_time.removesuffix("s"))
- duration_seconds = float(duration.removesuffix("s"))
-
-        # For mocking, this just returns a simple numpy array
- return np.array([start_time_seconds, duration_seconds], dtype=np.float64)
-
- def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, duration: float):
- result = []
-
- result.append( { 'start': 30, 'end': 60 } )
- result.append( { 'start': 100, 'end': 200 } )
- return result
-
-if __name__ == '__main__':
- unittest.main()
\ No newline at end of file
diff --git a/spaces/Hina4867/bingo/src/components/chat-message.tsx b/spaces/Hina4867/bingo/src/components/chat-message.tsx
deleted file mode 100644
index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000
--- a/spaces/Hina4867/bingo/src/components/chat-message.tsx
+++ /dev/null
@@ -1,93 +0,0 @@
-import remarkGfm from 'remark-gfm'
-import remarkMath from 'remark-math'
-import supersub from 'remark-supersub'
-import remarkBreaks from 'remark-breaks'
-import { cn } from '@/lib/utils'
-import { CodeBlock } from '@/components/ui/codeblock'
-import { MemoizedReactMarkdown } from '@/components/markdown'
-import { LearnMore } from './learn-more'
-import { ChatMessageModel } from '@/lib/bots/bing/types'
-import { useEffect } from 'react'
-import { TurnCounter } from './turn-counter'
-
-export interface ChatMessageProps {
- message: ChatMessageModel
-}
-
-export function ChatMessage({ message, ...props }: ChatMessageProps) {
- useEffect(() => {
- if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) {
- window.scrollBy(0, 200)
- }
- }, [message.text])
-
- return message.text ? (
-