diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Create Realistic and Immersive Environments with World Creator 2 - Download for Free.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Create Realistic and Immersive Environments with World Creator 2 - Download for Free.md
deleted file mode 100644
index 0c2bd9b2230dd0d48f6b8b4f6330c05aaef6944d..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Create Realistic and Immersive Environments with World Creator 2 - Download for Free.md
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
How to Download World Creator 2 for Free and Create Stunning Terrains
-
-
World Creator 2 is a powerful terrain and landscape generator that allows you to create realistic and immersive environments in real-time. Whether you are a game developer, a filmmaker, or an artist, World Creator 2 can help you bring your vision to life with its advanced features and tools.
-
-
In this article, we will show you how to download World Creator 2 for free and how to use it to create amazing terrains using real-world data from MapTiler Cloud.
What is World Creator 2?
-
-
World Creator 2 is the world's first real-time terrain and landscape generator, performing all of its generation and design processes entirely on the GPU across thousands of cores. It offers a highly optimized workflow with more tools and features than its predecessor, World Creator 1.
-
-
Some of the key features of World Creator 2 are:
-
-- Real-time terrain generation: You can create terrains from scratch by hand, use existing terrains to stamp your world, or combine both workflows to get what you want. There are no design limitations - everything is possible, and you are in complete control.
- Outstanding procedural power: You can apply and combine many different kinds of filters to modify the terrain you created or imported from another source. You can erode it, carve rivers and lakes, apply sediments, transform, stylize, and simulate water flow, sediment transport, and deposition, all in real-time.
-- Powerful design capabilities: You can draw anything, anytime, anywhere on your terrain. You can create roads, rivers, lakes, plateaus, terraces, raise mountains, and more - or just draw the shape you want by hand or use custom height-maps and real-world data to stamp your terrain.
-- Real-world maps integration: You can use real-world 3D DEM data from MapTiler Cloud to create realistic terrains based on any location on Earth. MapTiler Cloud provides high-quality elevation data for the whole world that you can easily import into World Creator 2.
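Services like MapTiler Cloud commonly distribute elevation as terrain-RGB raster tiles, where each pixel's color encodes a height value. World Creator 2 handles the import for you, but as background, here is a minimal sketch of how one such pixel decodes. It uses the widely documented terrain-RGB convention; confirm the exact encoding and tile endpoints against MapTiler's own documentation:

```python
def decode_terrain_rgb(r: int, g: int, b: int) -> float:
    """Decode one terrain-RGB pixel into elevation in meters.

    Common terrain-RGB convention:
        height = -10000 + (R * 256**2 + G * 256 + B) * 0.1
    """
    return -10000 + (r * 256 * 256 + g * 256 + b) * 0.1

print(decode_terrain_rgb(0, 0, 0))      # → -10000.0 (the encoding's floor)
print(decode_terrain_rgb(1, 134, 160))  # → 0.0 (sea level)
```

The 0.1 factor gives the format a 10 cm vertical resolution, which is why terrain stamped from real-world data keeps fine detail.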
-
-
How to Download World Creator 2 for Free?
-
-
World Creator 2 is commercial software that requires a license to use. However, you can download it for free and evaluate the full feature set during a 30-day trial. Here are the steps:
-
-- Go to the official website of World Creator 2: https://www.world-creator.com/
-- Click on the "Download" button at the top right corner of the page.
-- Choose the version that suits your operating system (Windows or Mac) and click on the "Download" button again.
-- You will be redirected to a page where you can enter your email address and name to receive a download link.
-- Check your email inbox for an email from World Creator with the subject "World Creator Download Link".
-- Click on the link in the email to start downloading World Creator 2.
-- Once the download is complete, unzip the file and run the installer.
-- Follow the instructions on the screen to install World Creator 2 on your computer.
-- When the installation is finished, launch World Creator 2 from your desktop or start menu.
-- You will be asked to enter a license key or activate a trial version. Choose the trial version option and click on "Activate".
-- You will be able to use World Creator 2 for free for 30 days with all its features unlocked.
-
-
How to Use World Creator 2 to Create Stunning Terrains?
-
-
Now that you have downloaded World Creator 2 for free, you can start creating amazing terrains with it. Here are some basic steps to get you started:
-
-- Open World Creator 2 and choose a project template or create a new project from scratch.
-- In the project settings panel, you can adjust various parameters such as terrain size, resolution, seed, biome type, etc.
-- In the terrain editor panel, you can use different tools and filters to sculpt and modify your terrain. You can also import height-maps or real-world data from MapTiler Cloud to stamp your terrain with realistic features.
- In the texture editor panel, you can apply different materials and textures to your terrain. You can also blend multiple textures to create smooth transitions between surfaces.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Blackfridaybookhussainzaidipdffreedownload.md b/spaces/1gistliPinn/ChatGPT4/Examples/Blackfridaybookhussainzaidipdffreedownload.md
deleted file mode 100644
index 6a7de81b69a15a7af92d344c17bdf7fc90b1d542..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Blackfridaybookhussainzaidipdffreedownload.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Black Friday Book Hussain Zaidi Pdf Free 33 - Yola Black Friday The True Story Of Bombay Bomb Blasts S ... How to Read a Bomb: Scenes ...
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Blu Hey Bro 1080p Telugu Movies The Best Comedy of the Year.md b/spaces/1gistliPinn/ChatGPT4/Examples/Blu Hey Bro 1080p Telugu Movies The Best Comedy of the Year.md
deleted file mode 100644
index c140e32869a851ec99e672cdc917bae90967138e..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Blu Hey Bro 1080p Telugu Movies The Best Comedy of the Year.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Buzzsaw 2011 32 Bit Keygen Free The Easiest Way to Install and Activate Buzzsaw.md b/spaces/1gistliPinn/ChatGPT4/Examples/Buzzsaw 2011 32 Bit Keygen Free The Easiest Way to Install and Activate Buzzsaw.md
deleted file mode 100644
index f1a77f44b65d582daa8d263ea72f8cf127966f7a..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Buzzsaw 2011 32 Bit Keygen Free The Easiest Way to Install and Activate Buzzsaw.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
Since the 2011 winter season, frost depths have been measured as an outreach program in Hokkaido, the northern part of Japan, where seasonal ground freezing occurs in winter. Frost depths were measured at elementary, junior high, and high schools in order to spark students' interest in earth sciences. At each school, using a simple frost tube, measurements were taken directly once a week by students or a teacher during ground freezing, under no-snow-removal conditions. A lecture was given in class and a frost tube was set up in the schoolyard, following the same tube design and protocol as UAF's Permafrost Outreach Program: a clear tube filled with blue-colored water. In the 2011 winter season we started measurements at three schools, and the number grew to 32 by the 2016 season: 26 elementary schools, 5 junior high schools, and one high school. We visited the schools in summer or just before the frost season to explain the measurement method, and measurements by students started just after the ground froze. After the end of the frozen period, we visited the schools again to discuss their results alongside those of other schools in Japan, Alaska, Canada, and Russia. The measured frost depths in Hokkaido ranged widely, from only a few centimeters to more than 50 cm; some schools recorded no frost at all due to heavy snow. We confirmed that frost depth depends strongly on air temperature and snow depth. Students were shown why the frost depths ranged so widely, and the insulating effect of snow was explained using the example of an igloo. To validate the effect of snow and to compare frost depths, we measured frost depths under snow-removal and no-snow-removal conditions at the same elementary school. At the end of December there was no significant difference between the two conditions, but after one month, with about 30 cm of snow on the ground, the difference had grown to 14 cm. Through these measurements and lectures, students learned that snow acts as an insulator and affects the frost depth.
-
Spring frost can be a limiting factor in sweet cherry (Prunus avium L.) production. Rising temperatures in spring force the development of buds, and the buds' vulnerability to freezing temperatures continuously increases. Once blossom begins, flowers can withstand only light frosts without significant damage. In this study, we investigated the risk of spring frost damage during cherry blossom under historical and future climate conditions at two sites, in NE Germany (Berlin) and SW Germany (Geisenheim). Two phenological models, developed from phenological observations at the experimental sweet cherry orchard in Berlin-Dahlem and validated for endodormancy release and for warmer climate conditions (already published), were used to calculate the beginning of cherry blossom in Geisenheim for 1951-2015 (external model validation). Then, on the basis of the statistical regionalisation model WETTREG (RCP 8.5), the frequency of frost during cherry blossom was calculated at both sites for historical (1971-2000) and future (2011-2100) climate conditions. From these data we derived the final flower damage, defined as the percentage of flowers frozen by single or multiple frost events during blossom. The results showed that rising temperatures in this century can advance the beginning of cherry blossom by up to 17 days at both sites, regardless of which phenological model was used. The frequency and severity of frost showed high temporal and local variability. At neither site was a significant increase in frost frequency or frost damage during blossom found; in Geisenheim, frost damage decreased significantly from the middle of the twenty-first century. This study also emphasises the importance of reliable phenological models that work not only for current but also for changed climate conditions, and at different sites. The date of endodormancy release should always be a known parameter in chilling/forcing models.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Barbie Dreamhouse Adventures APK MOD VIP Unlocked 2022.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Barbie Dreamhouse Adventures APK MOD VIP Unlocked 2022.md
deleted file mode 100644
index 7108a65fe4fdee4f52fd500bf9893bbe23e3ed28..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Barbie Dreamhouse Adventures APK MOD VIP Unlocked 2022.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-
Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK: How to Download and Play
-
If you are a fan of Barbie and her fabulous lifestyle, you might want to try out Barbie Dreamhouse Adventures, a fun simulation game for girls. In this game, you can create your own dreamhouse, design your own fashion, and join Barbie and her friends in various adventures. However, if you want to enjoy all the features and items in the game, you might need to spend some real money or watch ads. That's why some people prefer to use Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK, a modified version of the game that gives you everything for free. In this article, we will show you how to download and play this APK on your Android device.
Barbie Dreamhouse Adventures is a game developed by Budge Studios, a company that specializes in creating games for kids. The game is based on the popular animated series of the same name, which follows Barbie and her friends as they live in a glamorous dreamhouse. The game allows you to create your own dreamhouse, decorate it with furniture and accessories, and explore different rooms. You can also dress up Barbie and her friends with hundreds of outfits, hairstyles, and accessories. You can even design your own fashion and share it with other players.
-
Features of the game
-
Some of the features of Barbie Dreamhouse Adventures are:
-
-
You can customize your dreamhouse with wallpapers, furniture, decorations, and more.
-
You can join Barbie and her friends in various activities, such as baking, dancing, pool parties, pet care, and more.
-
You can unlock new items and characters as you progress in the game.
-
You can interact with other players and visit their dreamhouses.
-
You can watch episodes from the animated series and get inspired by them.
-
-
What is Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK?
-
A modified version of the game with everything unlocked
-
Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK is a modified version of the original game that gives you access to everything without paying or watching ads. This means that you can enjoy all the features and items in the game without any limitations. You can unlock all the rooms, furniture, outfits, accessories, characters, and activities in the game for free. You can also get unlimited coins and gems to buy anything you want.
-
-
Benefits of using the APK
-
Some of the benefits of using Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK are:
-
-
You can save your time and money by not having to watch ads or make in-app purchases.
-
You can have more fun and creativity by having access to everything in the game.
-
You can experience the full potential of the game without any restrictions.
-
You can play offline without needing an internet connection.
-
-
How to download and install Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK?
-
Steps to follow
-
To download and install Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK on your Android device, you need to follow these steps:
Step 1: Go to a trusted website that provides the APK file, such as APKPure or APKCombo.
-
- Step 2: Search for Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK and click on the download button.
-
- Step 3: Wait for the download to finish and then open the APK file.
-
- Step 4: If you see a warning message that says "Install unknown apps", you need to enable the option to allow installation from unknown sources. To do this, go to your device settings, then security, then unknown sources, and turn it on.
-
- Step 5: After that, you can proceed with the installation by following the instructions on the screen.
-
- Step 6: Once the installation is complete, you can launch the game and enjoy it.
-
Tips and warnings
-
Here are some tips and warnings that you should keep in mind when using Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK:
-
-
Make sure that you download the APK file from a reliable and safe website. Avoid downloading from unknown or suspicious sources that might contain malware or viruses.
-
Before installing the APK file, you should uninstall the original game if you have it on your device. This will prevent any conflicts or errors that might occur.
-
Be careful not to update the game from the Google Play Store or any other source. This will overwrite the APK file and remove all the unlocked features and items.
-
Backup your data before installing the APK file. This will help you restore your progress in case something goes wrong.
-
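To act on the advice above about avoiding tampered files, you can compare a download's SHA-256 digest with the one published on the download page, when the site lists one. A minimal sketch; the file name below is a placeholder:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the printed digest with the one shown on the download page:
# print(sha256_of_file("barbie-dreamhouse-adventures.apk"))
```

If the digests do not match, the file was corrupted or modified in transit and should not be installed.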
-
How to play Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK?
-
Explore the dreamhouse and customize it
-
One of the main attractions of Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK is that you can explore and customize your own dreamhouse. You can choose from different rooms, such as the kitchen, the living room, the bedroom, the bathroom, and more. You can also decorate them with various wallpapers, furniture, decorations, and more. You can even change the color and style of each item. You can also unlock new rooms and items as you play. You can create your own dreamhouse according to your taste and imagination.
-
Join Barbie and her friends in various activities
-
Another fun aspect of Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK is that you can join Barbie and her friends in various activities. You can bake delicious cakes, dance to catchy music, have pool parties, take care of cute pets, and more. You can also dress up Barbie and her friends with hundreds of outfits, hairstyles, and accessories. You can even design your own fashion and share it with other players. You can also watch episodes from the animated series and get inspired by them. You can have a lot of fun and adventure with Barbie and her friends.
-
Conclusion
-
Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK is a great game for girls who love Barbie and her fabulous lifestyle. It allows you to create your own dreamhouse, design your own fashion, and join Barbie and her friends in various adventures. It also gives you access to everything in the game without paying or watching ads. You can download and install this APK on your Android device by following the steps and tips we have provided in this article. We hope you enjoy playing this game and have a wonderful time.
-
FAQs
-
Here are some frequently asked questions about Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK:
-
-
Q: Is Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK safe to use?
-
A: Yes, as long as you download it from a trusted website that provides the original and unmodified APK file. However, you should always be careful when installing apps from unknown sources and scan them for any malware or viruses.
-
Q: Is Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK compatible with my device?
-
A: The APK file should work on most Android devices that support Android 4.4 or higher. However, some devices might have compatibility issues or performance problems depending on their specifications.
-
Q: How can I contact the developer of Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK?
-
A: The developer of this APK is not affiliated with Budge Studios, the official developer of Barbie Dreamhouse Adventures. You can usually only reach them through the website where you downloaded the file.
-
Q: Can I play Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK with my friends?
-
A: Yes, you can play this game with your friends online. You can visit their dreamhouses, chat with them, and join them in activities. You can also invite them to your dreamhouse and show them your creations.
-
Q: What are some alternatives to Barbie Dreamhouse Adventures Tudo Desbloqueado 2022 APK?
-
A: If you are looking for other games that are similar to Barbie Dreamhouse Adventures, you might want to check out these games:
-
-
Barbie Fashion Closet: A game where you can dress up Barbie and her friends with different outfits and accessories.
-
Barbie Magical Fashion: A game where you can transform Barbie into a princess, a mermaid, a fairy, or a hero.
-
Barbie Dreamtopia: A game where you can explore the magical worlds of Dreamtopia with Barbie and her sister Chelsea.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Brawlhalla on Your Mobile Device with the 32bit APK File.md b/spaces/1phancelerku/anime-remove-background/Enjoy Brawlhalla on Your Mobile Device with the 32bit APK File.md
deleted file mode 100644
index 49dfd91b459d51bdadd2e741f57ec78a6ec746ca..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy Brawlhalla on Your Mobile Device with the 32bit APK File.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-
Brawlhalla 32bit APK: How to Download and Play the Free Platform Fighting Game on Android
-
If you are looking for a fun and exciting fighting game that you can play on your mobile device, you should check out Brawlhalla. Brawlhalla is a free platform fighting game that supports up to 8 players online or local, with full cross-play across different platforms. You can choose from over 50 unique characters, each with their own weapons and abilities, and compete in various modes and maps. In this article, we will show you how to download and play Brawlhalla 32bit APK on your Android device.
-
What is Brawlhalla?
-
Brawlhalla is a game created and developed by Blue Mammoth Games and published by Ubisoft Entertainment. It launched in 2017 on PC and PS4, reached Xbox One and Nintendo Switch in 2018, and arrived on iOS and Android in 2020. It has over 80 million players worldwide and is one of the most popular fighting games on Steam.
A brief introduction to the game's features, modes, and characters
-
Brawlhalla features simple controls and one-button special moves that make it easy for anyone to pick up and play. You can also customize your controls and settings according to your preference. The game has many features that make it fun and engaging, such as:
-
-
Online Ranked 1v1 & 2v2 - Climb the ranked ladder from Tin up to Platinum and beyond by fighting against players near your skill level.
-
4 Player Online Free for All - Casual matches where four fighters enter, but only one can win.
-
Cross-play Custom Rooms - Invite up to 8 friends on all platforms to a huge variety of custom matches, such as 4v4s, 1v3, 2v2, FFA, and more.
-
Many Game Modes - Mix things up with Brawlball, Bombsketball, Capture the Flag, Kung-Foot, and many more fun party game modes.
-
The Training Room - Practice combos and setups inside the Training Room. Look at detailed frame data, hitboxes, hurtboxes, and sharpen your skills.
-
Weekly Rotation - Every week, there is a new Legend Rotation of eight free characters that you can play. You can also earn gold to unlock more Legends by playing any online game mode.
-
Battle Pass - Every season, there is a new Battle Pass that offers exclusive rewards such as skins, colors, avatars, emotes, sidekicks, KO effects, podiums, and more.
-
Crossovers - Brawlhalla features crossover events with other popular franchises such as Adventure Time, WWE, Steven Universe, Ben 10, The Walking Dead, Tomb Raider, Hellboy, Shovel Knight, Rayman, and more.
-
-
Brawlhalla has a diverse roster of over 50 Legends that you can choose from. Each Legend has their own stats (Strength, Dexterity, Defense, Speed) and wields two of the game's weapons (Sword, Hammer, Spear, Axe, Rocket Lance, Katars, Blaster, Bow, Gauntlets, Scythe, Cannon, Orb, Greatsword), each with its own set of signature moves.
How to Download Brawlhalla 32bit APK on Android
-
Brawlhalla is available for free on the Google Play Store for Android devices. However, some older devices may not support the game or run it smoothly. If you have a 32-bit Android device, you may need to download the Brawlhalla 32bit APK file from a trusted source and install it manually. Here are the steps to do that:
-
-
Go to a reputable website that offers APK files, such as APKPure, APKMirror, or Uptodown. Search for Brawlhalla and download the latest version of the 32bit APK file.
-
Before installing the APK file, you need to enable the installation of apps from unknown sources on your device. To do that, go to Settings > Security > Unknown Sources and toggle it on.
-
Locate the downloaded APK file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
-
Once the installation is done, you can launch Brawlhalla from your app drawer and enjoy the game.
-
-
Note: Make sure you have enough storage space on your device before downloading and installing the APK file. Also, be careful when downloading APK files from third-party sources, as some of them may contain malware or viruses. Always scan the files with a reliable antivirus app before installing them.
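If you are unsure whether your device is 32-bit, its primary ABI tells you. With USB debugging enabled, `adb shell getprop ro.product.cpu.abi` reports it; the sketch below classifies the common values (the sample ABI string is hard-coded here for illustration):

```python
def apk_variant_for_abi(abi: str) -> str:
    """Map a device's primary ABI to the APK build it needs."""
    if abi in ("arm64-v8a", "x86_64"):
        return "64-bit"
    if abi in ("armeabi-v7a", "x86"):
        return "32-bit"
    return "unknown"

# On a real device, feed in the output of:
#   adb shell getprop ro.product.cpu.abi
print(apk_variant_for_abi("armeabi-v7a"))  # → 32-bit
```

Devices reporting `armeabi-v7a` or `x86` are the ones that need the 32bit APK described in this article.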
-
How to Play Brawlhalla on Android
-
Brawlhalla is a game that requires quick reflexes, strategic thinking, and skillful execution. Whether you are playing online or offline, you need to know how to control your character and use your weapons effectively. Here are some basic tips and tricks to help you play Brawlhalla on Android:
-
The basic controls and mechanics of the game
-
Brawlhalla has a simple and intuitive control scheme that you can customize according to your preference. You can also choose between different control modes, such as touch screen, virtual joystick, or external controller. The default touch screen controls are as follows:
-
-
Tap anywhere on the left side of the screen to move your character left or right.
-
Swipe up or down on the left side of the screen to jump or drop down from a platform.
-
Tap on the right side of the screen to perform a light attack with your weapon.
-
Swipe in any direction on the right side of the screen to perform a heavy attack or a signature move with your weapon.
-
Tap on the weapon icon on the bottom right corner of the screen to pick up or throw a weapon.
-
Tap on the dodge icon on the bottom left corner of the screen to dodge an incoming attack or perform a recovery move in mid-air.
-
-
The basic mechanics of Brawlhalla are similar to other platform fighting games, such as Super Smash Bros. The goal is to knock out your opponents by dealing enough damage to them and sending them flying off the stage. You can see how much damage you have taken by looking at your character's color and percentage. The more damage you take, the redder your character becomes and the higher your percentage goes. The higher your percentage, the farther you fly when hit by an attack.
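The "higher percentage means farther knockback" relationship can be sketched as a simple linear model. This is purely illustrative — it is not Brawlhalla's actual damage math, and the base and scaling constants are made up:

```python
def knockback_distance(damage_percent, attack_power, base=10.0, scale=0.05):
    """Toy model: knockback grows linearly with the victim's accumulated damage.

    damage_percent: victim's current damage (higher = flies farther)
    attack_power:   strength of the hit (heavy attacks > light attacks)
    """
    return attack_power * (base + scale * damage_percent)

# The same hit sends a victim at 150% damage much farther than one at 20%.
```

The practical takeaway matches the text above: racking up damage first makes your knockout attempts far more likely to send an opponent off the stage.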
-
-
You can use different weapons and items to deal damage and knock out your opponents. Weapons spawn randomly on the stage and can be picked up by any player. Each weapon has its own moveset and signature moves that vary depending on which Legend you are using. Items such as bombs, mines, spike balls, and horns can also be thrown at your opponents to damage them or disrupt their movement.
-
The tips and tricks to improve your skills and win more matches
-
Brawlhalla is a game that rewards skill, practice, and creativity. There are many ways to improve your skills and win more matches, such as:
-
-
Learn how to use each weapon and Legend effectively. Experiment with different combinations of weapons and Legends and find out what suits your playstyle best. You can also watch tutorials, guides, and gameplay videos from other players online to learn from them.
-
Practice your combos and setups in the Training Room. You can use the Training Room to practice your moves, combos, setups, edgeguards, recoveries, and more. You can also adjust various settings such as gravity, damage, hitboxes, hurtboxes, frame data, etc., to help you analyze and improve your gameplay.
-
-
Play online with other players of different skill levels. Playing online with other players is one of the best ways to improve your skills and learn from your mistakes. You can play online ranked matches to climb the ladder and earn rewards, or play online casual matches to have fun and experiment with different strategies. You can also join custom rooms with your friends or other players and play various game modes and settings.
-
Watch replays of your matches and analyze your performance. You can watch replays of your matches and see what you did right and what you did wrong. You can also pause, rewind, fast-forward, and slow down the replay to see every detail of the match. You can use replays to identify your strengths and weaknesses, learn from your opponents, and improve your decision-making and execution.
-
Have fun and enjoy the game. Brawlhalla is a game that is meant to be fun and enjoyable for everyone. Don't get too frustrated or angry if you lose or make mistakes. Instead, use them as opportunities to grow and improve. Don't be afraid to try new things and experiment with different weapons and Legends. Don't be too hard on yourself or others, and don't forget to have fun.
-
-
Conclusion
-
Brawlhalla is a free platform fighting game that you can play on your Android device with the Brawlhalla 32bit APK file. It is a game that is easy to learn but hard to master, with many features, modes, characters, and items to choose from. It is a game that is fun and exciting for both casual and competitive players, with full cross-play support across different platforms. If you are looking for a game that will keep you entertained for hours, you should definitely give Brawlhalla a try.
-
FAQs
-
Is Brawlhalla free to play?
-
Yes, Brawlhalla is free to play on all platforms. You can download it from the Google Play Store for Android devices, or from the official website for PC, PS4, Xbox One, Nintendo Switch, and iOS devices. You can also download the Brawlhalla 32bit APK file from a trusted source if you have a 32-bit Android device.
-
Is Brawlhalla safe to download?
-
Yes, Brawlhalla is safe to download from the official sources mentioned above. However, if you are downloading the Brawlhalla 32bit APK file from a third-party source, you should be careful and scan the file with a reliable antivirus app before installing it. Some APK files may contain malware or viruses that can harm your device or compromise your privacy.
-
How do I update Brawlhalla on Android?
-
If you have downloaded Brawlhalla from the Google Play Store, you can update it automatically or manually from there. If you have downloaded the Brawlhalla 32bit APK file from a third-party source, you will need to download the latest version of the APK file from the same source and install it over the existing one.
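When sideloading an update, Android will only install the new APK over the existing one if it is signed with the same key and carries a higher version. The comparison itself can be sketched as below — the version numbers are hypothetical, and comparing as tuples of integers avoids the classic string-ordering bug where "7.10" sorts before "7.9":

```python
def parse_version(v):
    """Turn a dotted version string like '7.12.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def should_update(installed, downloaded):
    """Install the downloaded APK only if it is strictly newer."""
    return parse_version(downloaded) > parse_version(installed)
```

For example, `should_update("7.9.0", "7.10.0")` is true under tuple comparison, whereas a naive string comparison would wrongly treat "7.10.0" as older.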
-
How do I get more gold in Brawlhalla?
-
You can get more gold in Brawlhalla by playing any online game mode, such as ranked, free for all, custom rooms, etc. You can also get more gold by completing daily missions and weekly challenges, or by leveling up your account or your Legends.
-
How do I get more skins in Brawlhalla?
-
You can get more skins in Brawlhalla by purchasing them with Mammoth Coins, which are the premium currency of the game. You can buy Mammoth Coins with real money through in-app purchases or through official partner websites. You can also get some skins for free by participating in events, promotions, giveaways, tournaments, etc.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/FNAF 4 Download Free The Ultimate Guide to Install and Play the Game.md b/spaces/1phancelerku/anime-remove-background/FNAF 4 Download Free The Ultimate Guide to Install and Play the Game.md
deleted file mode 100644
index 0c3dda9a115c435de99aace7553d4e29c35af411..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/FNAF 4 Download Free The Ultimate Guide to Install and Play the Game.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
How to Download and Play Five Nights at Freddy's 4 for Free
-
If you are a fan of horror games, you might have heard of Five Nights at Freddy's, a popular series of survival horror games that have terrified millions of players around the world. The fourth installment of the series, Five Nights at Freddy's 4, is arguably the most terrifying and challenging one yet. In this article, we will tell you what Five Nights at Freddy's 4 is, why you should play it, and how you can download and play it for free on your PC or mobile device.
-
What is Five Nights at Freddy's 4?
-
Five Nights at Freddy's 4, known during development as Five Nights at Freddy's: The Final Chapter, is an indie point-and-click survival horror game developed and published by Scott Cawthon, and the fourth installment of the Five Nights at Freddy's series. The game is a prequel to Five Nights at Freddy's 2 and, taking place in 1983, is chronologically the first game in the series.
The game takes place in the bedroom of a child, where the player must avoid attacks by nightmarish animatronics that stalk them. Instead of a monitor to ward away the animatronics, the player must check the doors, the closet, and the bed, and use a flashlight to ward off any nightmare animatronics outside the room, relying on environmental noises to tell if something is approaching or about to attack.
-
Why should you play Five Nights at Freddy's 4?
-
Five Nights at Freddy's 4 is a game that will test your nerves, reflexes, and patience. It is not a game for the faint-hearted, as it features some of the most terrifying jumpscares and sound effects in gaming history. The game also has a deep and mysterious lore that will keep you hooked and intrigued. The game has received mostly positive reviews from critics and players alike, praising its horror, suspense, and challenge.
-
If you are looking for a game that will make you scream, sweat, and jump out of your seat, then Five Nights at Freddy's 4 is the game for you. It traps you in a nightmare you can't wake up from, makes you question your own senses, and delivers fear like little else in gaming.
-
How to download Five Nights at Freddy's 4 for free on PC?
-
If you want to play Five Nights at Freddy's 4 on your PC for free, you can use BlueStacks, an Android emulator that allows you to run Android apps and games on your PC. Here are the steps to download and play Five Nights at Freddy's 4 for free on PC using BlueStacks:
-
-
Download BlueStacks from its official website and install it on your PC.
-
Launch BlueStacks and sign in with your Google account.
-
Search for Five Nights at Freddy's 4 in the Google Play Store app and install it.
-
Open Five Nights at Freddy's 4 and enjoy playing it on your PC.
-
-
You can also customize the settings, controls, and graphics of the game according to your preference using BlueStacks. You can also record and stream your gameplay using BlueStacks' built-in features.
-
-
How to download Five Nights at Freddy's 4 for free on mobile?
-
If you want to play Five Nights at Freddy's 4 on your mobile device for free, you can use Google Play or App Store to download the game on your Android or iOS device. Here are the steps to download and play Five Nights at Freddy's 4 for free on mobile using Google Play or App Store:
-
-
Open Google Play or App Store on your device and search for Five Nights at Freddy's 4.
-
Tap on the game and install it on your device.
-
Open Five Nights at Freddy's 4 and enjoy playing it on your mobile device.
-
-
You can also adjust the settings, controls, and sound of the game according to your preference using the game's menu. You can also use headphones or earphones to enhance the immersion and horror of the game.
-
How to play Five Nights at Freddy's 4 effectively?
-
Five Nights at Freddy's 4 is a game that requires skill, strategy, and concentration. It is not a game that you can play casually or mindlessly. It is a game that will challenge you and make you think fast. Here are some gameplay tips and strategies to help you play Five Nights at Freddy's 4 effectively:
-
-
Listen carefully to the sounds. The sounds are your main source of information in the game. You need to listen to the breathing, footsteps, laughter, and other noises that indicate the presence and location of the nightmare animatronics. If you hear breathing at the door, close it until you hear them leave. If you hear footsteps or laughter, flash your light at the door or closet to scare them away. If you hear nothing, check the bed or the closet for any plushies or animatronics.
-
Use your flashlight wisely. Your flashlight is your only weapon in the game, but it also consumes power and attracts attention. You need to use it sparingly and strategically. You need to flash it at the door or closet to check for any animatronics or plushies, but only for a brief moment. If you flash it too long or too often, you will run out of power or attract more animatronics. You also need to avoid flashing it when you hear breathing, as that will trigger a jumpscare.
-
Manage your time and power. The game lasts from 12 AM to 6 AM, which is equivalent to about 8 minutes in real time. You need to survive each night without running out of power or getting jumpscared by the animatronics. You need to balance your time and power between checking the doors, closet, bed, and hallway. You need to prioritize the most dangerous animatronics, such as Nightmare Fredbear and Nightmare, who can appear from any direction and require quick reactions.
-
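The timing above can be made concrete: a night runs from 12 AM to 6 AM (six in-game hours) in roughly 8 real minutes, so each in-game hour lasts about 80 real seconds. A small sketch of that arithmetic (the 8-minute figure is the article's approximation, not an exact value):

```python
NIGHT_REAL_MINUTES = 8   # approximate real-time length of one night
IN_GAME_HOURS = 6        # 12 AM -> 6 AM

seconds_per_in_game_hour = NIGHT_REAL_MINUTES * 60 / IN_GAME_HOURS  # 80.0

def in_game_hour(elapsed_seconds):
    """Map elapsed real seconds to the in-game hour (12 AM == hour 0)."""
    hour = int(elapsed_seconds // seconds_per_in_game_hour)
    return min(hour, IN_GAME_HOURS)  # clamp at 6 AM, when the night ends
```

Knowing that each in-game hour is only about 80 seconds long helps you pace your checks and avoid panicking early in the night.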
-
Conclusion
-
Five Nights at Freddy's 4 delivers horror like few other games: it will make you scream, sweat, and jump out of your seat, question your own senses, and feel trapped in a nightmare you can't wake up from.
-
If you are brave enough to face your fears, then download and play Five Nights at Freddy's 4 for free on your PC or mobile device today. You can use BlueStacks, Google Play, or App Store to download and play the game easily and conveniently. You can also use our gameplay tips and strategies to help you survive the nights and avoid the jumpscares.
-
Are you ready to face the nightmare? Are you ready to play Five Nights at Freddy's 4?
-
FAQs
-
What is the story of Five Nights at Freddy's 4?
-
The story of Five Nights at Freddy's 4 is told through minigames that occur between each night. The minigames reveal that the game takes place in 1983, and follows a young boy who is tormented by his older brother and his friends who wear masks of Freddy Fazbear and his friends. The boy is also terrified of Fredbear's Family Diner, a restaurant that features animatronic mascots that entertain children during the day. On his birthday, his brother and his friends force him to get close to Fredbear, who bites his head, causing the infamous Bite of '83. The boy is then hospitalized and suffers from nightmares of the animatronics, which are the gameplay segments of the game. The boy eventually dies from his injuries, and is comforted by a voice that tells him that he will put him back together.
-
Who are the nightmare animatronics in Five Nights at Freddy's 4?
-
The nightmare animatronics in Five Nights at Freddy's 4 are twisted and monstrous versions of the original animatronics from the previous games. They are the manifestations of the boy's fear and trauma, and they include:
-
-
Nightmare Freddy: A dark brown bear with three smaller Freddles on his body. He can appear from the bed or the right hall.
-
Nightmare Bonnie: A dark blue rabbit with sharp teeth and claws. He can appear from the left hall or the closet.
-
Nightmare Chica: A dark yellow chicken with a cupcake on a plate. She can appear from the right hall or the closet.
-
Nightmare Foxy: A dark red fox with a hook and an eye patch. He can appear from the closet or the left hall.
-
Nightmare Fredbear: A golden bear with purple hat and bow tie. He can appear from any direction and replaces all other animatronics on Night 5 and 6.
-
Nightmare: A black and transparent version of Nightmare Fredbear with white eyes and teeth. He can appear from any direction and replaces Nightmare Fredbear on Night 7 and 8.
-
Plushtrap: A small green rabbit with a spring-loaded mechanism. He can appear in a separate minigame called Fun with Plushtrap, where the player must stop him on an X mark using a flashlight.
-
Nightmarionne: A black and white puppet with long arms and legs. He can appear in the Halloween Edition of the game, where he replaces Plushtrap.
-
Nightmare Mangle: A mangled version of Foxy with multiple heads and limbs. He can appear in the Halloween Edition of the game, where he replaces Nightmare Foxy.
-
Jack-O-Bonnie: A dark orange rabbit with a jack-o-lantern head. He can appear in the Halloween Edition of the game, where he replaces Nightmare Bonnie.
-
Jack-O-Chica: A dark orange chicken with a jack-o-lantern head and a pumpkin on a plate. She can appear in the Halloween Edition of the game, where she replaces Nightmare Chica.
-
-
What are the secrets and easter eggs in Five Nights at Freddy's 4?
-
Five Nights at Freddy's 4 is a game that is full of secrets and easter eggs that add to its lore and mystery. Some of the secrets and easter eggs in Five Nights at Freddy's 4 are:
-
-
The clock ending: If the player collects four keys hidden in various minigames, they can unlock a secret ending where they play as Fredbear plushie and guide the crying child to a locked box that contains "the pieces put together". However, the box never opens, leaving its contents unknown.
-
The newspaper clippings: If the player looks closely at some of the newspapers on the walls of the minigames, they can see some references to previous games, such as "Fredbear's Family Diner to close after tragedy", "Local pizzeria threatened with shutdown over sanitation", and "Local pizzeria said to close by year's end".
-
The IV drip, flowers, and pills: If the player looks closely at some of the objects in the bedroom, they can see an IV drip, flowers, and pills that appear and disappear randomly. These objects imply that the child is in a coma and is being treated in a hospital.
-
The phone call: If the player listens carefully to the background noise of Night 1, they can hear a distorted version of the phone call from Five Nights at Freddy's 1, where Phone Guy mentions the Bite of '87. This suggests that the game is connected to the first game and that the Bite of '83 and the Bite of '87 are two separate incidents.
-
The purple guy: If the player completes the Night 3 minigame, they can see a brief glimpse of a man in a purple uniform putting a Spring Bonnie suit on an employee. This man is implied to be William Afton, the main antagonist of the series and the killer of the children who possess the animatronics.
-
-
Is Five Nights at Freddy's 4 the last game in the series?
-
No, Five Nights at Freddy's 4 is not the last game in the series. Although it was originally intended to be the final chapter of the original story, Scott Cawthon later announced that he would continue to make more games in the series, as well as spin-offs, novels, and movies. Some of the games that have been released after Five Nights at Freddy's 4 are:
-
-
Five Nights at Freddy's: Sister Location: A game that takes place in a sister location of Freddy Fazbear's Pizza, where the player must survive against new animatronics called Circus Baby, Ballora, Funtime Freddy, and Funtime Foxy.
-
Freddy Fazbear's Pizzeria Simulator: A game that combines a tycoon simulator and a survival horror game, where the player must manage their own pizzeria and deal with salvaged animatronics that try to kill them.
-
Ultimate Custom Night: A game that features 50 selectable animatronics from previous games, where the player can customize their difficulty and challenge themselves to survive as long as possible.
-
Five Nights at Freddy's VR: Help Wanted: A game that features virtual reality versions of classic and original minigames set in the Five Nights at Freddy's universe.
-
Five Nights at Freddy's AR: Special Delivery: A game that uses augmented reality to bring animatronics to life in the real world, where the player must collect, repair, and fight them.
-
Five Nights at Freddy's: Security Breach: A game that is set to be released in late 2021, where the player will explore a new location called Freddy Fazbear's Mega Pizza Plex, and face new animatronics such as Glamrock Freddy, Glamrock Chica, Montgomery Gator, Roxanne Wolf, and Vanny.
-
-
Where can I find more information about Five Nights at Freddy's 4?
-
If you want to find more information about Five Nights at Freddy's 4, you can visit some of these websites:
-
-
Website
Description
-
[Five Nights at Freddy's Wiki]
A comprehensive wiki that contains information about the characters, locations, gameplay, lore, and secrets of Five Nights at Freddy's 4 and other games in the series.
-
[Scott Games]
The official website of Scott Cawthon, the creator of Five Nights at Freddy's 4 and other games in the series. The website features teasers, updates, and announcements about his projects.
-
[Steam]
The official store page of Five Nights at Freddy's 4 on Steam, where you can buy the game, read reviews, and join discussions.
-
[YouTube]
A popular video-sharing platform where you can watch gameplay videos, trailers, theories, and reactions of Five Nights at Freddy's 4 and other games in the series.
-
[Reddit]
A popular online community where you can join subreddits, such as r/fivenightsatfreddys, r/fnaf, and r/fnaf4, to share your thoughts, opinions, fan art, memes, and questions about Five Nights at Freddy's 4 and other games in the series.
-
-
-
\ No newline at end of file
diff --git a/spaces/2023Liu2023/bingo/src/components/user-menu.tsx b/spaces/2023Liu2023/bingo/src/components/user-menu.tsx
deleted file mode 100644
index 9bd1edc9cf9f39b63629b021f0c1186b1a7c1341..0000000000000000000000000000000000000000
--- a/spaces/2023Liu2023/bingo/src/components/user-menu.tsx
+++ /dev/null
@@ -1,113 +0,0 @@
-'use client'
-
-import { useEffect, useState } from 'react'
-import Image from 'next/image'
-import { toast } from 'react-hot-toast'
-import { Button } from '@/components/ui/button'
-import pkg from '../../package.json'
-import {
- DropdownMenu,
- DropdownMenuContent,
- DropdownMenuItem,
- DropdownMenuSeparator,
- DropdownMenuTrigger
-} from '@/components/ui/dropdown-menu'
-import { IconCopy, IconExternalLink, IconGitHub } from '@/components/ui/icons'
-import SettingIcon from '@/assets/images/settings.svg'
-import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard'
-
-export function UserMenu() {
- const [host, setHost] = useState('')
- const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 })
- useEffect(() => {
- setHost(location.host)
- }, [])
-
- useEffect(() => {
- if (isCopied) {
- toast.success('复制成功')
- }
- }, [isCopied])
- return (
-
- )
-}
diff --git a/spaces/2ndelement/voicevox/voicevox_engine/__init__.py b/spaces/2ndelement/voicevox/voicevox_engine/__init__.py
deleted file mode 100644
index ca702104050d218302f1b0850d0b679eb8c1c617..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/voicevox_engine/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-__version__ = "latest"
diff --git a/spaces/AIFILMS/ControlNet-Video/README.md b/spaces/AIFILMS/ControlNet-Video/README.md
deleted file mode 100644
index beed4e183a8041ad6d013c448388c7363207454f..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/ControlNet-Video/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ControlNet-Video
-emoji: 🕹
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.18.0
-python_version: 3.10.9
-app_file: app.py
-pinned: false
-duplicated_from: fffiloni/ControlNet-Video
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/__init__.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/conformer/layers.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/conformer/layers.py
deleted file mode 100644
index 65b6414128dc9499940c1dcfd38ab27f119152cd..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/conformer/layers.py
+++ /dev/null
@@ -1,260 +0,0 @@
-from torch import nn
-import torch
-
-from text_to_speech.modules.commons.layers import LayerNorm
-
-
-class ConvolutionModule(nn.Module):
- """ConvolutionModule in Conformer model.
- Args:
- channels (int): The number of channels of conv layers.
- kernel_size (int): Kernel size of conv layers.
- """
-
- def __init__(self, channels, kernel_size, activation=nn.ReLU(), bias=True):
- """Construct a ConvolutionModule object."""
- super(ConvolutionModule, self).__init__()
- # kernel_size should be an odd number for 'SAME' padding
- assert (kernel_size - 1) % 2 == 0
-
- self.pointwise_conv1 = nn.Conv1d(
- channels,
- 2 * channels,
- kernel_size=1,
- stride=1,
- padding=0,
- bias=bias,
- )
- self.depthwise_conv = nn.Conv1d(
- channels,
- channels,
- kernel_size,
- stride=1,
- padding=(kernel_size - 1) // 2,
- groups=channels,
- bias=bias,
- )
- self.norm = nn.BatchNorm1d(channels)
- self.pointwise_conv2 = nn.Conv1d(
- channels,
- channels,
- kernel_size=1,
- stride=1,
- padding=0,
- bias=bias,
- )
- self.activation = activation
-
- def forward(self, x):
- """Compute convolution module.
- Args:
- x (torch.Tensor): Input tensor (#batch, time, channels).
- Returns:
- torch.Tensor: Output tensor (#batch, time, channels).
- """
- # exchange the temporal dimension and the feature dimension
- x = x.transpose(1, 2)
-
- # GLU mechanism
- x = self.pointwise_conv1(x) # (batch, 2*channel, time)
- x = nn.functional.glu(x, dim=1) # (batch, channel, time)
-
- # 1D Depthwise Conv
- x = self.depthwise_conv(x)
- x = self.activation(self.norm(x))
-
- x = self.pointwise_conv2(x)
-
- return x.transpose(1, 2)
-
-
-class MultiLayeredConv1d(torch.nn.Module):
- """Multi-layered conv1d for Transformer block.
- This is a module of multi-layered conv1d designed
- to replace positionwise feed-forward network
- in Transformer block, which is introduced in
- `FastSpeech: Fast, Robust and Controllable Text to Speech`_.
- .. _`FastSpeech: Fast, Robust and Controllable Text to Speech`:
- https://arxiv.org/pdf/1905.09263.pdf
- """
-
- def __init__(self, in_chans, hidden_chans, kernel_size, dropout_rate):
- """Initialize MultiLayeredConv1d module.
- Args:
- in_chans (int): Number of input channels.
- hidden_chans (int): Number of hidden channels.
- kernel_size (int): Kernel size of conv1d.
- dropout_rate (float): Dropout rate.
- """
- super(MultiLayeredConv1d, self).__init__()
- self.w_1 = torch.nn.Conv1d(
- in_chans,
- hidden_chans,
- kernel_size,
- stride=1,
- padding=(kernel_size - 1) // 2,
- )
- self.w_2 = torch.nn.Conv1d(
- hidden_chans,
- in_chans,
- kernel_size,
- stride=1,
- padding=(kernel_size - 1) // 2,
- )
- self.dropout = torch.nn.Dropout(dropout_rate)
-
- def forward(self, x):
- """Calculate forward propagation.
- Args:
- x (torch.Tensor): Batch of input tensors (B, T, in_chans).
- Returns:
- torch.Tensor: Batch of output tensors (B, T, hidden_chans).
- """
- x = torch.relu(self.w_1(x.transpose(-1, 1))).transpose(-1, 1)
- return self.w_2(self.dropout(x).transpose(-1, 1)).transpose(-1, 1)
-
-
-class Swish(torch.nn.Module):
- """Construct a Swish object."""
-
- def forward(self, x):
- """Return Swish activation function."""
- return x * torch.sigmoid(x)
-
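The Swish activation defined above, x·σ(x), has a few easy-to-verify properties: it is zero at zero, approaches x for large positive x, and approaches zero from below for large negative x. A self-contained scalar restatement for illustration (the module above operates on tensors; this mirrors it elementwise):

```python
import math

def swish(x: float) -> float:
    """Scalar Swish: x * sigmoid(x), mirroring the torch module above."""
    return x * (1.0 / (1.0 + math.exp(-x)))
```

Unlike ReLU, Swish is smooth everywhere and allows small negative outputs, which is one reason the Conformer architecture favors it.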
-
-class EncoderLayer(nn.Module):
- """Encoder layer module.
- Args:
- size (int): Input dimension.
- self_attn (torch.nn.Module): Self-attention module instance.
- `MultiHeadedAttention` or `RelPositionMultiHeadedAttention` instance
- can be used as the argument.
- feed_forward (torch.nn.Module): Feed-forward module instance.
- `PositionwiseFeedForward`, `MultiLayeredConv1d`, or `Conv1dLinear` instance
- can be used as the argument.
- feed_forward_macaron (torch.nn.Module): Additional feed-forward module instance.
- `PositionwiseFeedForward`, `MultiLayeredConv1d`, or `Conv1dLinear` instance
- can be used as the argument.
- conv_module (torch.nn.Module): Convolution module instance.
- `ConvolutionModule` instance can be used as the argument.
- dropout_rate (float): Dropout rate.
- normalize_before (bool): Whether to use layer_norm before the first block.
- concat_after (bool): Whether to concat attention layer's input and output.
- if True, additional linear will be applied.
- i.e. x -> x + linear(concat(x, att(x)))
- if False, no additional linear will be applied. i.e. x -> x + att(x)
- """
-
- def __init__(
- self,
- size,
- self_attn,
- feed_forward,
- feed_forward_macaron,
- conv_module,
- dropout_rate,
- normalize_before=True,
- concat_after=False,
- ):
- """Construct an EncoderLayer object."""
- super(EncoderLayer, self).__init__()
- self.self_attn = self_attn
- self.feed_forward = feed_forward
- self.feed_forward_macaron = feed_forward_macaron
- self.conv_module = conv_module
- self.norm_ff = LayerNorm(size) # for the FNN module
- self.norm_mha = LayerNorm(size) # for the MHA module
- if feed_forward_macaron is not None:
- self.norm_ff_macaron = LayerNorm(size)
- self.ff_scale = 0.5
- else:
- self.ff_scale = 1.0
- if self.conv_module is not None:
- self.norm_conv = LayerNorm(size) # for the CNN module
- self.norm_final = LayerNorm(size) # for the final output of the block
- self.dropout = nn.Dropout(dropout_rate)
- self.size = size
- self.normalize_before = normalize_before
- self.concat_after = concat_after
- if self.concat_after:
- self.concat_linear = nn.Linear(size + size, size)
-
- def forward(self, x_input, mask, cache=None):
- """Compute encoded features.
- Args:
- x_input (Union[Tuple, torch.Tensor]): Input tensor w/ or w/o pos emb.
- - w/ pos emb: Tuple of tensors [(#batch, time, size), (1, time, size)].
- - w/o pos emb: Tensor (#batch, time, size).
- mask (torch.Tensor): Mask tensor for the input (#batch, time).
- cache (torch.Tensor): Cache tensor of the input (#batch, time - 1, size).
- Returns:
- torch.Tensor: Output tensor (#batch, time, size).
- torch.Tensor: Mask tensor (#batch, time).
- """
- if isinstance(x_input, tuple):
- x, pos_emb = x_input[0], x_input[1]
- else:
- x, pos_emb = x_input, None
-
- # whether to use macaron style
- if self.feed_forward_macaron is not None:
- residual = x
- if self.normalize_before:
- x = self.norm_ff_macaron(x)
- x = residual + self.ff_scale * self.dropout(self.feed_forward_macaron(x))
- if not self.normalize_before:
- x = self.norm_ff_macaron(x)
-
- # multi-headed self-attention module
- residual = x
- if self.normalize_before:
- x = self.norm_mha(x)
-
- if cache is None:
- x_q = x
- else:
- assert cache.shape == (x.shape[0], x.shape[1] - 1, self.size)
- x_q = x[:, -1:, :]
- residual = residual[:, -1:, :]
- mask = None if mask is None else mask[:, -1:, :]
-
- if pos_emb is not None:
- x_att = self.self_attn(x_q, x, x, pos_emb, mask)
- else:
- x_att = self.self_attn(x_q, x, x, mask)
-
- if self.concat_after:
- x_concat = torch.cat((x, x_att), dim=-1)
- x = residual + self.concat_linear(x_concat)
- else:
- x = residual + self.dropout(x_att)
- if not self.normalize_before:
- x = self.norm_mha(x)
-
- # convolution module
- if self.conv_module is not None:
- residual = x
- if self.normalize_before:
- x = self.norm_conv(x)
- x = residual + self.dropout(self.conv_module(x))
- if not self.normalize_before:
- x = self.norm_conv(x)
-
- # feed forward module
- residual = x
- if self.normalize_before:
- x = self.norm_ff(x)
- x = residual + self.ff_scale * self.dropout(self.feed_forward(x))
- if not self.normalize_before:
- x = self.norm_ff(x)
-
- if self.conv_module is not None:
- x = self.norm_final(x)
-
- if cache is not None:
- x = torch.cat([cache, x], dim=1)
-
- if pos_emb is not None:
- return (x, pos_emb), mask
-
- return x, mask
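The deleted `EncoderLayer` above applies the Conformer's macaron-style feed-forward: two half-step FFN residuals, each scaled by `ff_scale = 0.5`, sandwiching the self-attention block. Below is a minimal stdlib sketch of just that residual ordering, with toy stand-ins for the sub-modules (the real layer additionally applies layer norm, dropout, and the convolution module):

```python
def macaron_block(x, feed_forward, attn, ff_scale=0.5):
    """Macaron-style residual ordering: half-step FFN, attention, half-step FFN.

    Layer norm, dropout, and the convolution module of the real layer are
    omitted; x is a plain float here rather than a tensor.
    """
    x = x + ff_scale * feed_forward(x)  # first feed-forward half
    x = x + attn(x)                     # self-attention with residual
    x = x + ff_scale * feed_forward(x)  # second feed-forward half
    return x


# Toy stand-ins for the sub-modules (illustrative only):
ff = lambda v: 2.0 * v   # "feed-forward" that doubles its input
att = lambda v: 0.0 * v  # "attention" contributing nothing; the residual keeps x
```

With these stand-ins, an input of 1.0 becomes 2.0 after the first half-step, is unchanged by the zero attention, and becomes 4.0 after the second half-step.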
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/updater/__init__.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/updater/__init__.py
deleted file mode 100644
index 34f672f8ce0cbbce488b50f2ef919dc01b7cf26d..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/updater/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from agentverse.registry import Registry
-
-updater_registry = Registry(name="UpdaterRegistry")
-
-from .base import BaseUpdater
-from .basic import BasicUpdater
-from .classroom import ClassroomUpdater
-from .sde_team import SdeTeamUpdater
-from .pokemon import PokemonUpdater
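The updater package above follows a common register-then-build pattern: a shared `Registry` maps string keys to updater classes so configs can select them by name. A stdlib-only sketch of that pattern (the actual `agentverse.registry.Registry` API may differ; the decorator-based `register()` here is an assumption for illustration):

```python
class Registry:
    """Minimal name -> class registry, sketching how updater_registry is used.

    The real agentverse.registry.Registry may expose a different API; the
    decorator-based register() here is an assumption for illustration.
    """

    def __init__(self, name):
        self.name = name
        self.entries = {}

    def register(self, key):
        def decorator(cls):
            self.entries[key] = cls
            return cls
        return decorator

    def build(self, key, **kwargs):
        return self.entries[key](**kwargs)


updater_registry = Registry(name="UpdaterRegistry")


@updater_registry.register("basic")
class BasicUpdater:
    """Stand-in for the real BasicUpdater (illustrative only)."""
    def __init__(self, verbose=False):
        self.verbose = verbose
```

Importing each updater module (as the `__init__.py` above does) is what triggers the registrations as a side effect.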
diff --git a/spaces/Ahmadjaved/Genaispeech/README.md b/spaces/Ahmadjaved/Genaispeech/README.md
deleted file mode 100644
index 3a3c84da14e01eaaba8e297fb02cda59b5dfad6e..0000000000000000000000000000000000000000
--- a/spaces/Ahmadjaved/Genaispeech/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Genaispeech
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AlexWang/lama/saicinpainting/training/trainers/base.py b/spaces/AlexWang/lama/saicinpainting/training/trainers/base.py
deleted file mode 100644
index f1b1c66fc96e7edfba7b1ee193272f92b5db7438..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/saicinpainting/training/trainers/base.py
+++ /dev/null
@@ -1,291 +0,0 @@
-import copy
-import logging
-from typing import Dict, Tuple
-
-import pandas as pd
-import pytorch_lightning as ptl
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.utils.data import DistributedSampler
-
-from saicinpainting.evaluation import make_evaluator
-from saicinpainting.training.data.datasets import make_default_train_dataloader, make_default_val_dataloader
-from saicinpainting.training.losses.adversarial import make_discrim_loss
-from saicinpainting.training.losses.perceptual import PerceptualLoss, ResNetPL
-from saicinpainting.training.modules import make_generator, make_discriminator
-from saicinpainting.training.visualizers import make_visualizer
-from saicinpainting.utils import add_prefix_to_keys, average_dicts, set_requires_grad, flatten_dict, \
- get_has_ddp_rank
-
-LOGGER = logging.getLogger(__name__)
-
-
-def make_optimizer(parameters, kind='adamw', **kwargs):
- if kind == 'adam':
- optimizer_class = torch.optim.Adam
- elif kind == 'adamw':
- optimizer_class = torch.optim.AdamW
- else:
- raise ValueError(f'Unknown optimizer kind {kind}')
- return optimizer_class(parameters, **kwargs)
-
-
-def update_running_average(result: nn.Module, new_iterate_model: nn.Module, decay=0.999):
- with torch.no_grad():
- res_params = dict(result.named_parameters())
- new_params = dict(new_iterate_model.named_parameters())
-
- for k in res_params.keys():
- res_params[k].data.mul_(decay).add_(new_params[k].data, alpha=1 - decay)
-
-
-def make_multiscale_noise(base_tensor, scales=6, scale_mode='bilinear'):
- batch_size, _, height, width = base_tensor.shape
- cur_height, cur_width = height, width
- result = []
- align_corners = False if scale_mode in ('bilinear', 'bicubic') else None
- for _ in range(scales):
- cur_sample = torch.randn(batch_size, 1, cur_height, cur_width, device=base_tensor.device)
- cur_sample_scaled = F.interpolate(cur_sample, size=(height, width), mode=scale_mode, align_corners=align_corners)
- result.append(cur_sample_scaled)
- cur_height //= 2
- cur_width //= 2
- return torch.cat(result, dim=1)
-
-
-class BaseInpaintingTrainingModule(ptl.LightningModule):
- def __init__(self, config, use_ddp, *args, predict_only=False, visualize_each_iters=100,
- average_generator=False, generator_avg_beta=0.999, average_generator_start_step=30000,
- average_generator_period=10, store_discr_outputs_for_vis=False,
- **kwargs):
- super().__init__(*args, **kwargs)
- LOGGER.info('BaseInpaintingTrainingModule init called')
-
- self.config = config
-
- self.generator = make_generator(config, **self.config.generator)
- self.use_ddp = use_ddp
-
- if not get_has_ddp_rank():
- LOGGER.info(f'Generator\n{self.generator}')
-
- if not predict_only:
- self.save_hyperparameters(self.config)
- self.discriminator = make_discriminator(**self.config.discriminator)
- self.adversarial_loss = make_discrim_loss(**self.config.losses.adversarial)
- self.visualizer = make_visualizer(**self.config.visualizer)
- self.val_evaluator = make_evaluator(**self.config.evaluator)
- self.test_evaluator = make_evaluator(**self.config.evaluator)
-
- if not get_has_ddp_rank():
- LOGGER.info(f'Discriminator\n{self.discriminator}')
-
- extra_val = self.config.data.get('extra_val', ())
- if extra_val:
- self.extra_val_titles = list(extra_val)
- self.extra_evaluators = nn.ModuleDict({k: make_evaluator(**self.config.evaluator)
- for k in extra_val})
- else:
- self.extra_evaluators = {}
-
- self.average_generator = average_generator
- self.generator_avg_beta = generator_avg_beta
- self.average_generator_start_step = average_generator_start_step
- self.average_generator_period = average_generator_period
- self.generator_average = None
- self.last_generator_averaging_step = -1
- self.store_discr_outputs_for_vis = store_discr_outputs_for_vis
-
- if self.config.losses.get("l1", {"weight_known": 0})['weight_known'] > 0:
- self.loss_l1 = nn.L1Loss(reduction='none')
-
- if self.config.losses.get("mse", {"weight": 0})['weight'] > 0:
- self.loss_mse = nn.MSELoss(reduction='none')
-
- if self.config.losses.perceptual.weight > 0:
- self.loss_pl = PerceptualLoss()
-
- if self.config.losses.get("resnet_pl", {"weight": 0})['weight'] > 0:
- self.loss_resnet_pl = ResNetPL(**self.config.losses.resnet_pl)
- else:
- self.loss_resnet_pl = None
-
- self.visualize_each_iters = visualize_each_iters
- LOGGER.info('BaseInpaintingTrainingModule init done')
-
- def configure_optimizers(self):
- discriminator_params = list(self.discriminator.parameters())
- return [
- dict(optimizer=make_optimizer(self.generator.parameters(), **self.config.optimizers.generator)),
- dict(optimizer=make_optimizer(discriminator_params, **self.config.optimizers.discriminator)),
- ]
-
- def train_dataloader(self):
- kwargs = dict(self.config.data.train)
- if self.use_ddp:
- kwargs['ddp_kwargs'] = dict(num_replicas=self.trainer.num_nodes * self.trainer.num_processes,
- rank=self.trainer.global_rank,
- shuffle=True)
- dataloader = make_default_train_dataloader(**kwargs)  # use kwargs so ddp_kwargs is not dropped
- return dataloader
-
- def val_dataloader(self):
- res = [make_default_val_dataloader(**self.config.data.val)]
-
- if self.config.data.visual_test is not None:
- res = res + [make_default_val_dataloader(**self.config.data.visual_test)]
- else:
- res = res + res
-
- extra_val = self.config.data.get('extra_val', ())
- if extra_val:
- res += [make_default_val_dataloader(**extra_val[k]) for k in self.extra_val_titles]
-
- return res
-
- def training_step(self, batch, batch_idx, optimizer_idx=None):
- self._is_training_step = True
- return self._do_step(batch, batch_idx, mode='train', optimizer_idx=optimizer_idx)
-
- def validation_step(self, batch, batch_idx, dataloader_idx):
- extra_val_key = None
- if dataloader_idx == 0:
- mode = 'val'
- elif dataloader_idx == 1:
- mode = 'test'
- else:
- mode = 'extra_val'
- extra_val_key = self.extra_val_titles[dataloader_idx - 2]
- self._is_training_step = False
- return self._do_step(batch, batch_idx, mode=mode, extra_val_key=extra_val_key)
-
- def training_step_end(self, batch_parts_outputs):
- if self.training and self.average_generator \
- and self.global_step >= self.average_generator_start_step \
- and self.global_step >= self.last_generator_averaging_step + self.average_generator_period:
- if self.generator_average is None:
- self.generator_average = copy.deepcopy(self.generator)
- else:
- update_running_average(self.generator_average, self.generator, decay=self.generator_avg_beta)
- self.last_generator_averaging_step = self.global_step
-
- full_loss = (batch_parts_outputs['loss'].mean()
- if torch.is_tensor(batch_parts_outputs['loss']) # loss is not tensor when no discriminator used
- else torch.tensor(batch_parts_outputs['loss']).float().requires_grad_(True))
- log_info = {k: v.mean() for k, v in batch_parts_outputs['log_info'].items()}
- self.log_dict(log_info, on_step=True, on_epoch=False)
- return full_loss
-
- def validation_epoch_end(self, outputs):
- outputs = [step_out for out_group in outputs for step_out in out_group]
- averaged_logs = average_dicts(step_out['log_info'] for step_out in outputs)
- self.log_dict({k: v.mean() for k, v in averaged_logs.items()})
-
- pd.set_option('display.max_columns', 500)
- pd.set_option('display.width', 1000)
-
- # standard validation
- val_evaluator_states = [s['val_evaluator_state'] for s in outputs if 'val_evaluator_state' in s]
- val_evaluator_res = self.val_evaluator.evaluation_end(states=val_evaluator_states)
- val_evaluator_res_df = pd.DataFrame(val_evaluator_res).stack(1).unstack(0)
- val_evaluator_res_df.dropna(axis=1, how='all', inplace=True)
- LOGGER.info(f'Validation metrics after epoch #{self.current_epoch}, '
- f'total {self.global_step} iterations:\n{val_evaluator_res_df}')
-
- for k, v in flatten_dict(val_evaluator_res).items():
- self.log(f'val_{k}', v)
-
- # standard visual test
- test_evaluator_states = [s['test_evaluator_state'] for s in outputs
- if 'test_evaluator_state' in s]
- test_evaluator_res = self.test_evaluator.evaluation_end(states=test_evaluator_states)
- test_evaluator_res_df = pd.DataFrame(test_evaluator_res).stack(1).unstack(0)
- test_evaluator_res_df.dropna(axis=1, how='all', inplace=True)
- LOGGER.info(f'Test metrics after epoch #{self.current_epoch}, '
- f'total {self.global_step} iterations:\n{test_evaluator_res_df}')
-
- for k, v in flatten_dict(test_evaluator_res).items():
- self.log(f'test_{k}', v)
-
- # extra validations
- if self.extra_evaluators:
- for cur_eval_title, cur_evaluator in self.extra_evaluators.items():
- cur_state_key = f'extra_val_{cur_eval_title}_evaluator_state'
- cur_states = [s[cur_state_key] for s in outputs if cur_state_key in s]
- cur_evaluator_res = cur_evaluator.evaluation_end(states=cur_states)
- cur_evaluator_res_df = pd.DataFrame(cur_evaluator_res).stack(1).unstack(0)
- cur_evaluator_res_df.dropna(axis=1, how='all', inplace=True)
- LOGGER.info(f'Extra val {cur_eval_title} metrics after epoch #{self.current_epoch}, '
- f'total {self.global_step} iterations:\n{cur_evaluator_res_df}')
- for k, v in flatten_dict(cur_evaluator_res).items():
- self.log(f'extra_val_{cur_eval_title}_{k}', v)
-
- def _do_step(self, batch, batch_idx, mode='train', optimizer_idx=None, extra_val_key=None):
- if optimizer_idx == 0: # step for generator
- set_requires_grad(self.generator, True)
- set_requires_grad(self.discriminator, False)
- elif optimizer_idx == 1: # step for discriminator
- set_requires_grad(self.generator, False)
- set_requires_grad(self.discriminator, True)
-
- batch = self(batch)
-
- total_loss = 0
- metrics = {}
-
- if optimizer_idx is None or optimizer_idx == 0: # step for generator
- total_loss, metrics = self.generator_loss(batch)
-
- elif optimizer_idx == 1: # step for discriminator (optimizer_idx is None was handled above)
- if self.config.losses.adversarial.weight > 0:
- total_loss, metrics = self.discriminator_loss(batch)
-
- if self.get_ddp_rank() in (None, 0) and (batch_idx % self.visualize_each_iters == 0 or mode == 'test'):
- if self.config.losses.adversarial.weight > 0:
- if self.store_discr_outputs_for_vis:
- with torch.no_grad():
- self.store_discr_outputs(batch)
- vis_suffix = f'_{mode}'
- if mode == 'extra_val':
- vis_suffix += f'_{extra_val_key}'
- self.visualizer(self.current_epoch, batch_idx, batch, suffix=vis_suffix)
-
- metrics_prefix = f'{mode}_'
- if mode == 'extra_val':
- metrics_prefix += f'{extra_val_key}_'
- result = dict(loss=total_loss, log_info=add_prefix_to_keys(metrics, metrics_prefix))
- if mode == 'val':
- result['val_evaluator_state'] = self.val_evaluator.process_batch(batch)
- elif mode == 'test':
- result['test_evaluator_state'] = self.test_evaluator.process_batch(batch)
- elif mode == 'extra_val':
- result[f'extra_val_{extra_val_key}_evaluator_state'] = self.extra_evaluators[extra_val_key].process_batch(batch)
-
- return result
-
- def get_current_generator(self, no_average=False):
- if not no_average and not self.training and self.average_generator and self.generator_average is not None:
- return self.generator_average
- return self.generator
-
- def forward(self, batch: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
- """Pass data through the generator and obtain at least the 'predicted_image' and 'inpainted' keys"""
- raise NotImplementedError()
-
- def generator_loss(self, batch) -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]:
- raise NotImplementedError()
-
- def discriminator_loss(self, batch) -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]:
- raise NotImplementedError()
-
- def store_discr_outputs(self, batch):
- out_size = batch['image'].shape[2:]
- discr_real_out, _ = self.discriminator(batch['image'])
- discr_fake_out, _ = self.discriminator(batch['predicted_image'])
- batch['discr_output_real'] = F.interpolate(discr_real_out, size=out_size, mode='nearest')
- batch['discr_output_fake'] = F.interpolate(discr_fake_out, size=out_size, mode='nearest')
- batch['discr_output_diff'] = batch['discr_output_real'] - batch['discr_output_fake']
-
- def get_ddp_rank(self):
- return self.trainer.global_rank if (self.trainer.num_nodes * self.trainer.num_processes) > 1 else None
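`update_running_average` in the trainer above keeps an exponential moving average of the generator's weights: avg <- decay * avg + (1 - decay) * new. The same arithmetic on plain Python lists, as a stdlib sketch of the in-place tensor update:

```python
def ema_update(avg_params, new_params, decay=0.999):
    """avg <- decay * avg + (1 - decay) * new, element-wise.

    The same arithmetic as update_running_average above, on flat Python
    lists instead of in-place tensor updates.
    """
    return [decay * a + (1.0 - decay) * n
            for a, n in zip(avg_params, new_params)]
```

With `decay=0.9`, averaging a stored value of 0.0 against a new value of 1.0 yields 0.1; values that already agree stay fixed.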
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/coder/__init__.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/coder/__init__.py
deleted file mode 100644
index ae455ba8fc0e0727e2d581cdc8f20fceededf99a..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/coder/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from .base_bbox_coder import BaseBBoxCoder
-from .bucketing_bbox_coder import BucketingBBoxCoder
-from .delta_xywh_bbox_coder import DeltaXYWHBBoxCoder
-from .legacy_delta_xywh_bbox_coder import LegacyDeltaXYWHBBoxCoder
-from .pseudo_bbox_coder import PseudoBBoxCoder
-from .tblr_bbox_coder import TBLRBBoxCoder
-from .yolo_bbox_coder import YOLOBBoxCoder
-
-__all__ = [
- 'BaseBBoxCoder', 'PseudoBBoxCoder', 'DeltaXYWHBBoxCoder',
- 'LegacyDeltaXYWHBBoxCoder', 'TBLRBBoxCoder', 'YOLOBBoxCoder',
- 'BucketingBBoxCoder'
-]
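`DeltaXYWHBBoxCoder`, re-exported above, encodes a ground-truth box against a proposal as center and log-size deltas (dx, dy, dw, dh). A stdlib sketch of that encoding for a single box pair; normalization by target means/stds and delta clamping, which the real coder applies, are omitted, and mmdet's exact width convention may differ:

```python
import math

def encode_delta_xywh(proposal, gt):
    """Encode a ground-truth box against a proposal as (dx, dy, dw, dh).

    Boxes are (x1, y1, x2, y2). Normalization by target means/stds and
    delta clamping are omitted (illustrative sketch only).
    """
    px, py = (proposal[0] + proposal[2]) / 2.0, (proposal[1] + proposal[3]) / 2.0
    pw, ph = proposal[2] - proposal[0], proposal[3] - proposal[1]
    gx, gy = (gt[0] + gt[2]) / 2.0, (gt[1] + gt[3]) / 2.0
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]
    # center offsets are normalized by the proposal size; scale deltas are logs
    return ((gx - px) / pw, (gy - py) / ph,
            math.log(gw / pw), math.log(gh / ph))
```

An identical proposal and ground truth encode to all zeros, which is what makes this parameterization convenient as a regression target.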
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/bbox_heads/double_bbox_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/bbox_heads/double_bbox_head.py
deleted file mode 100644
index 6c154cb3c0d9d7639c3d4a2a1272406d3fab8acd..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/bbox_heads/double_bbox_head.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import torch.nn as nn
-from mmcv.cnn import ConvModule, normal_init, xavier_init
-
-from mmdet.models.backbones.resnet import Bottleneck
-from mmdet.models.builder import HEADS
-from .bbox_head import BBoxHead
-
-
-class BasicResBlock(nn.Module):
- """Basic residual block.
-
- This block is a little different from the block in the ResNet backbone.
- The kernel size of conv1 is 1 in this block while 3 in ResNet BasicBlock.
-
- Args:
- in_channels (int): Channels of the input feature map.
- out_channels (int): Channels of the output feature map.
- conv_cfg (dict): The config dict for convolution layers.
- norm_cfg (dict): The config dict for normalization layers.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- conv_cfg=None,
- norm_cfg=dict(type='BN')):
- super(BasicResBlock, self).__init__()
-
- # main path
- self.conv1 = ConvModule(
- in_channels,
- in_channels,
- kernel_size=3,
- padding=1,
- bias=False,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg)
- self.conv2 = ConvModule(
- in_channels,
- out_channels,
- kernel_size=1,
- bias=False,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=None)
-
- # identity path
- self.conv_identity = ConvModule(
- in_channels,
- out_channels,
- kernel_size=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=None)
-
- self.relu = nn.ReLU(inplace=True)
-
- def forward(self, x):
- identity = x
-
- x = self.conv1(x)
- x = self.conv2(x)
-
- identity = self.conv_identity(identity)
- out = x + identity
-
- out = self.relu(out)
- return out
-
-
-@HEADS.register_module()
-class DoubleConvFCBBoxHead(BBoxHead):
- r"""Bbox head used in Double-Head R-CNN
-
- .. code-block:: none
-
- /-> cls
- /-> shared convs ->
- \-> reg
- roi features
- /-> cls
- \-> shared fc ->
- \-> reg
- """ # noqa: W605
-
- def __init__(self,
- num_convs=0,
- num_fcs=0,
- conv_out_channels=1024,
- fc_out_channels=1024,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- **kwargs):
- kwargs.setdefault('with_avg_pool', True)
- super(DoubleConvFCBBoxHead, self).__init__(**kwargs)
- assert self.with_avg_pool
- assert num_convs > 0
- assert num_fcs > 0
- self.num_convs = num_convs
- self.num_fcs = num_fcs
- self.conv_out_channels = conv_out_channels
- self.fc_out_channels = fc_out_channels
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
-
- # increase the channel of input features
- self.res_block = BasicResBlock(self.in_channels,
- self.conv_out_channels)
-
- # add conv heads
- self.conv_branch = self._add_conv_branch()
- # add fc heads
- self.fc_branch = self._add_fc_branch()
-
- out_dim_reg = 4 if self.reg_class_agnostic else 4 * self.num_classes
- self.fc_reg = nn.Linear(self.conv_out_channels, out_dim_reg)
-
- self.fc_cls = nn.Linear(self.fc_out_channels, self.num_classes + 1)
- self.relu = nn.ReLU(inplace=True)
-
- def _add_conv_branch(self):
- """Add the conv branch, which consists of a sequence of conv (Bottleneck) layers."""
- branch_convs = nn.ModuleList()
- for i in range(self.num_convs):
- branch_convs.append(
- Bottleneck(
- inplanes=self.conv_out_channels,
- planes=self.conv_out_channels // 4,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
- return branch_convs
-
- def _add_fc_branch(self):
- """Add the fc branch, which consists of a sequence of fc layers."""
- branch_fcs = nn.ModuleList()
- for i in range(self.num_fcs):
- fc_in_channels = (
- self.in_channels *
- self.roi_feat_area if i == 0 else self.fc_out_channels)
- branch_fcs.append(nn.Linear(fc_in_channels, self.fc_out_channels))
- return branch_fcs
-
- def init_weights(self):
- # conv layers are already initialized by ConvModule
- normal_init(self.fc_cls, std=0.01)
- normal_init(self.fc_reg, std=0.001)
-
- for m in self.fc_branch.modules():
- if isinstance(m, nn.Linear):
- xavier_init(m, distribution='uniform')
-
- def forward(self, x_cls, x_reg):
- # conv head
- x_conv = self.res_block(x_reg)
-
- for conv in self.conv_branch:
- x_conv = conv(x_conv)
-
- if self.with_avg_pool:
- x_conv = self.avg_pool(x_conv)
-
- x_conv = x_conv.view(x_conv.size(0), -1)
- bbox_pred = self.fc_reg(x_conv)
-
- # fc head
- x_fc = x_cls.view(x_cls.size(0), -1)
- for fc in self.fc_branch:
- x_fc = self.relu(fc(x_fc))
-
- cls_score = self.fc_cls(x_fc)
-
- return cls_score, bbox_pred
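`DoubleConvFCBBoxHead.forward` above routes the two ROI feature inputs through separate branches: the conv branch feeds the regression output while the fc branch feeds the classification score. A toy sketch of just that routing, with the branches and output layers passed in as plain callables (the names are illustrative, not mmdet API):

```python
def double_head_forward(x_cls, x_reg, conv_branch, fc_branch, fc_cls, fc_reg):
    """Route ROI features through the two branches of a double head.

    The conv branch feeds the regression output; the fc branch feeds the
    classification score. Pooling/flattening and the shared res block are
    omitted, and every module is a plain callable here.
    """
    x_conv = x_reg
    for conv in conv_branch:      # conv head -> bbox regression
        x_conv = conv(x_conv)
    bbox_pred = fc_reg(x_conv)

    x_fc = x_cls
    for fc in fc_branch:          # fc head -> classification
        x_fc = fc(x_fc)
    cls_score = fc_cls(x_fc)
    return cls_score, bbox_pred
```

The point of the split is that spatial conv features tend to help localization while flattened fc features tend to help classification, which is the Double-Head R-CNN design the ASCII diagram in the docstring depicts.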
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x1024_80k_cityscapes.py
deleted file mode 100644
index 506ad9319a9418f50650c477698c9b5cb9bf6663..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/ocrnet_hr18.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/gui/ui_draw.py b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/gui/ui_draw.py
deleted file mode 100644
index db2bb5dd678895709f882905cbcd23e75397bb95..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/gui/ui_draw.py
+++ /dev/null
@@ -1,189 +0,0 @@
-from PyQt5 import QtGui, QtCore, QtWidgets
-
-
-#######################################################################################################
-# painter function
-#######################################################################################################
-class painter(QtWidgets.QWidget):
- """the class for a painter"""
- def __init__(self, parent, image=None):
- super(painter, self).__init__()
- if image is None:
- w = h = 256
- else:
- w, h = image.size().width(), image.size().height()
- self.ParentLink = parent
- self.setPalette(QtGui.QPalette(QtCore.Qt.white))
- self.setAutoFillBackground(True)
- self.setMaximumSize(w, h)
- self.map = QtGui.QImage(w, h, QtGui.QImage.Format_RGB32)
- self.map.fill(QtCore.Qt.black)
- self.image = image
- self.shape = self.ParentLink.shape
- self.CurrentWidth = self.ParentLink.CurrentWidth
- self.MouseLoc = point(0, 0)
- self.LastPos = point(0, 0)
- self.Brush = False
- self.DrawingShapes_free = shapes()
- self.DrawingShapes_rec = shapes()
- self.IsPainting = False
- self.IsEraseing = False
- self.iteration = 0
-
- self.CurrentColor = colour3(255, 255, 255)
-
- self.ShapeNum = 0
- self.IsMouseing = False
- self.PaintPanel = 0
-
- def drawLines(self, painter):
- """draw free-form masks"""
- painter.setRenderHint(QtGui.QPainter.Antialiasing)
- for i in range(self.DrawingShapes_free.NumberOfShapes()-1):
- T = self.DrawingShapes_free.GetShape(i)
- T1 = self.DrawingShapes_free.GetShape(i + 1)
-
- if T.ShapeNumber == T1.ShapeNumber:
- pen = QtGui.QPen(QtGui.QColor(T.Color.R, T.Color.G, T.Color.B), T.Width / 2, QtCore.Qt.SolidLine)
- painter.setPen(pen)
- painter.drawLine(T.Location.X, T.Location.Y, T1.Location.X, T1.Location.Y)
-
- def drawRectangle(self, painter):
- """draw rectangle mask"""
- painter.setRenderHint(QtGui.QPainter.Antialiasing)
- for i in range(self.DrawingShapes_rec.NumberOfShapes()-1):
- T = self.DrawingShapes_rec.GetShape(i)
- T1 = self.DrawingShapes_rec.GetShape(i+1)
-
- if T.ShapeNumber == T1.ShapeNumber:
- pen = QtGui.QPen(QtGui.QColor(T.Color.R, T.Color.G, T.Color.B), T.Width/2, QtCore.Qt.SolidLine)
- painter.setPen(pen)
- painter.setBrush(QtGui.QColor(T.Color.R, T.Color.G, T.Color.B))
- painter.drawRects(QtCore.QRect(QtCore.QPoint(T.Location.X, T.Location.Y), QtCore.QPoint(T1.Location.X, T1.Location.Y)))
-
- def saveDraw(self):
- """save the painted masks"""
- painter = QtGui.QPainter()
- painter.begin(self.map)
- if self.shape == 'line':
- self.drawLines(painter)
- if self.shape == 'rectangle':
- self.drawRectangle(painter)
- painter.end()
-
- def mousePressEvent(self, event):
- """mouse down event for the drawing"""
- if self.Brush:
- self.IsPainting = True
- self.ShapeNum += 1
- if self.shape == 'rectangle':
- self.DrawingShapes_rec.NewShape(point(event.x(), event.y()), self.CurrentWidth, self.CurrentColor, self.ShapeNum)
- else:
- self.LastPos = point(0, 0)
- else:
- self.IsEraseing = True
- if self.shape == 'rectangle':
- self.DrawingShapes_rec.NewShape(point(event.x(), event.y()), self.CurrentWidth, self.CurrentColor, self.ShapeNum)
-
- def mouseMoveEvent(self, event):
- """mouse move event to record the track"""
- if self.IsPainting:
- self.MouseLoc = point(event.x(), event.y())
- if self.LastPos.X != self.MouseLoc.X or self.LastPos.Y != self.MouseLoc.Y:
- self.LastPos = point(event.x(), event.y())
- if self.shape == 'line':
- self.DrawingShapes_free.NewShape(self.LastPos, self.CurrentWidth, self.CurrentColor, self.ShapeNum)
- self.repaint()
- if self.IsEraseing:
- self.MouseLoc = point(event.x(), event.y())
- if self.shape == 'line':
- self.DrawingShapes_free.RemoveShape(self.MouseLoc, 10)
- elif self.shape == 'rectangle':
- self.DrawingShapes_rec.RemoveShape(self.MouseLoc, 10)
- self.repaint()
-
- def mouseReleaseEvent(self, event):
- """mouse up event"""
- if self.IsEraseing:
- self.IsEraseing = False
- self.repaint()
- elif self.shape == 'rectangle':
- self.DrawingShapes_rec.NewShape(point(event.x(), event.y()), self.CurrentWidth, self.CurrentColor, self.ShapeNum)
- self.repaint()
-
- def paintEvent(self, event):
- painter = QtGui.QPainter()
- painter.begin(self)
- if self.image is not None:
- painter.drawImage(0, 0, self.image)
- if self.shape == 'line':
- self.drawLines(painter)
- if self.shape == 'rectangle':
- self.drawRectangle(painter)
- painter.end()
- self.iteration = 0
-
-
-#######################################################################################################
-# base drawing function
-#######################################################################################################
-class colour3:
- """define the colour plane for the drawing"""
- def __init__(self, nR=0, nG=0, nB=0):
- self.R = nR
- self.G = nG
- self.B = nB
-
-
-class point():
- """define the location"""
- def __init__(self, nX=0, nY=0):
- self.X = nX
- self.Y = nY
-
- def Set(self, nX, nY):
- self.X = nX
- self.Y = nY
-
-
-class shape():
- """define the painter shape"""
- def __init__(self, location=point(0,0), width=1, color=colour3(255, 255, 255), number=0):
- self.Location = location
- self.Width = width
- self.Color = color
- self.ShapeNumber = number
-
-
-class shapes():
- """a set of shape"""
- def __init__(self):
- self.shapes = []
-
- def NumberOfShapes(self):
- return len(self.shapes)
-
- def NewShape(self, location=point(0,0), width=1, color=colour3(255,255,255), number=0):
- Sh = shape(location, width, color, number)
- self.shapes.append(Sh)
-
- def GetShape(self, Index):
- return self.shapes[Index]
-
- def RemoveShape(self, L, threshold):
- i = 0
- while True:
- if (i == len(self.shapes)):
- break
- # check whether this point lies within `threshold` of the point to remove
- if ((abs(L.X - self.shapes[i].Location.X) < threshold) and (
- abs(L.Y - self.shapes[i].Location.Y) < threshold)):
- # remove the matching point
- del self.shapes[i]
- # bump the shape numbers of the remaining points so the
- # stroke is split into a separate shape at the gap
- for n in range(len(self.shapes) - i):
- self.shapes[n + i].ShapeNumber += 1
- # step back so we don't skip the point that shifted into slot i
- i -= 1
- i += 1
\ No newline at end of file
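`shapes.RemoveShape` above erases every recorded point near a clicked location and renumbers the points that follow, so the stroke splits into separate shapes at the gap. The same algorithm on plain dicts, without the index back-stepping quirk:

```python
def remove_near(points, target, threshold):
    """Remove every point whose x and y both lie within `threshold` of target,
    then bump the shape numbers of the points that follow so the stroke is
    split into a separate shape at the gap.

    Each point is a dict with keys 'x', 'y', and 'shape'; target is (x, y).
    A sketch of the RemoveShape algorithm above, not the original class.
    """
    i = 0
    while i < len(points):
        p = points[i]
        if abs(target[0] - p['x']) < threshold and abs(target[1] - p['y']) < threshold:
            del points[i]
            for rest in points[i:]:  # renumber the tail into a new shape
                rest['shape'] += 1
            continue  # slot i now holds the next point; re-check it
        i += 1
    return points
```

Using `continue` after a deletion re-checks the element that shifted into the freed slot, which is what the original's `i -= 1; i += 1` dance achieves.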
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/blocks.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/blocks.py
deleted file mode 100644
index 2145d18fa98060a618536d9a64fe6589e9be4f78..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/blocks.py
+++ /dev/null
@@ -1,342 +0,0 @@
-import torch
-import torch.nn as nn
-
-from .vit import (
- _make_pretrained_vitb_rn50_384,
- _make_pretrained_vitl16_384,
- _make_pretrained_vitb16_384,
- forward_vit,
-)
-
-def _make_encoder(backbone, features, use_pretrained, groups=1, expand=False, exportable=True, hooks=None, use_vit_only=False, use_readout="ignore",):
- if backbone == "vitl16_384":
- pretrained = _make_pretrained_vitl16_384(
- use_pretrained, hooks=hooks, use_readout=use_readout
- )
- scratch = _make_scratch(
- [256, 512, 1024, 1024], features, groups=groups, expand=expand
- ) # ViT-L/16 - 85.0% Top1 (backbone)
- elif backbone == "vitb_rn50_384":
- pretrained = _make_pretrained_vitb_rn50_384(
- use_pretrained,
- hooks=hooks,
- use_vit_only=use_vit_only,
- use_readout=use_readout,
- )
- scratch = _make_scratch(
- [256, 512, 768, 768], features, groups=groups, expand=expand
- ) # ViT-H/16 - 85.0% Top1 (backbone)
- elif backbone == "vitb16_384":
- pretrained = _make_pretrained_vitb16_384(
- use_pretrained, hooks=hooks, use_readout=use_readout
- )
- scratch = _make_scratch(
- [96, 192, 384, 768], features, groups=groups, expand=expand
- ) # ViT-B/16 - 84.6% Top1 (backbone)
- elif backbone == "resnext101_wsl":
- pretrained = _make_pretrained_resnext101_wsl(use_pretrained)
- scratch = _make_scratch([256, 512, 1024, 2048], features, groups=groups, expand=expand) # resnext101_wsl
- elif backbone == "efficientnet_lite3":
- pretrained = _make_pretrained_efficientnet_lite3(use_pretrained, exportable=exportable)
- scratch = _make_scratch([32, 48, 136, 384], features, groups=groups, expand=expand) # efficientnet_lite3
- else:
- raise ValueError(f"Backbone '{backbone}' not implemented")
-
- return pretrained, scratch
-
-
-def _make_scratch(in_shape, out_shape, groups=1, expand=False):
- scratch = nn.Module()
-
- out_shape1 = out_shape
- out_shape2 = out_shape
- out_shape3 = out_shape
- out_shape4 = out_shape
- if expand:
- out_shape1 = out_shape
- out_shape2 = out_shape*2
- out_shape3 = out_shape*4
- out_shape4 = out_shape*8
-
- scratch.layer1_rn = nn.Conv2d(
- in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
- scratch.layer2_rn = nn.Conv2d(
- in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
- scratch.layer3_rn = nn.Conv2d(
- in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
- scratch.layer4_rn = nn.Conv2d(
- in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
-
- return scratch
-
-
-def _make_pretrained_efficientnet_lite3(use_pretrained, exportable=False):
- efficientnet = torch.hub.load(
- "rwightman/gen-efficientnet-pytorch",
- "tf_efficientnet_lite3",
- pretrained=use_pretrained,
- exportable=exportable
- )
- return _make_efficientnet_backbone(efficientnet)
-
-
-def _make_efficientnet_backbone(effnet):
- pretrained = nn.Module()
-
- pretrained.layer1 = nn.Sequential(
- effnet.conv_stem, effnet.bn1, effnet.act1, *effnet.blocks[0:2]
- )
- pretrained.layer2 = nn.Sequential(*effnet.blocks[2:3])
- pretrained.layer3 = nn.Sequential(*effnet.blocks[3:5])
- pretrained.layer4 = nn.Sequential(*effnet.blocks[5:9])
-
- return pretrained
-
-
-def _make_resnet_backbone(resnet):
- pretrained = nn.Module()
- pretrained.layer1 = nn.Sequential(
- resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1
- )
-
- pretrained.layer2 = resnet.layer2
- pretrained.layer3 = resnet.layer3
- pretrained.layer4 = resnet.layer4
-
- return pretrained
-
-
-def _make_pretrained_resnext101_wsl(use_pretrained):
- resnet = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl")
- return _make_resnet_backbone(resnet)
-
-
-
-class Interpolate(nn.Module):
- """Interpolation module.
- """
-
- def __init__(self, scale_factor, mode, align_corners=False):
- """Init.
-
- Args:
- scale_factor (float): scaling
- mode (str): interpolation mode
- """
- super(Interpolate, self).__init__()
-
- self.interp = nn.functional.interpolate
- self.scale_factor = scale_factor
- self.mode = mode
- self.align_corners = align_corners
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input
-
- Returns:
- tensor: interpolated data
- """
-
- x = self.interp(
- x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners
- )
-
- return x
-
-
-class ResidualConvUnit(nn.Module):
- """Residual convolution module.
- """
-
- def __init__(self, features):
- """Init.
-
- Args:
- features (int): number of features
- """
- super().__init__()
-
- self.conv1 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True
- )
-
- self.conv2 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True
- )
-
- self.relu = nn.ReLU(inplace=True)
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input
-
- Returns:
- tensor: output
- """
- out = self.relu(x) # note: the ReLU is inplace, so x itself becomes relu(x) and the skip below adds relu(x)
- out = self.conv1(out)
- out = self.relu(out)
- out = self.conv2(out)
-
- return out + x
-
-
-class FeatureFusionBlock(nn.Module):
- """Feature fusion block.
- """
-
- def __init__(self, features):
- """Init.
-
- Args:
- features (int): number of features
- """
- super(FeatureFusionBlock, self).__init__()
-
- self.resConfUnit1 = ResidualConvUnit(features)
- self.resConfUnit2 = ResidualConvUnit(features)
-
- def forward(self, *xs):
- """Forward pass.
-
- Returns:
- tensor: output
- """
- output = xs[0]
-
- if len(xs) == 2:
- output += self.resConfUnit1(xs[1])
-
- output = self.resConfUnit2(output)
-
- output = nn.functional.interpolate(
- output, scale_factor=2, mode="bilinear", align_corners=True
- )
-
- return output
-
-
-
-
-class ResidualConvUnit_custom(nn.Module):
- """Residual convolution module.
- """
-
- def __init__(self, features, activation, bn):
- """Init.
-
- Args:
- features (int): number of features
- """
- super().__init__()
-
- self.bn = bn
-
- self.groups=1
-
- self.conv1 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups
- )
-
- self.conv2 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups
- )
-
- if self.bn:
- self.bn1 = nn.BatchNorm2d(features)
- self.bn2 = nn.BatchNorm2d(features)
-
- self.activation = activation
-
- self.skip_add = nn.quantized.FloatFunctional()
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input
-
- Returns:
- tensor: output
- """
-
- out = self.activation(x)
- out = self.conv1(out)
- if self.bn:
- out = self.bn1(out)
-
- out = self.activation(out)
- out = self.conv2(out)
- if self.bn:
- out = self.bn2(out)
-
- if self.groups > 1:
- # note: conv_merge is not defined in this class; groups is fixed to 1 above, so this branch is unreachable
- out = self.conv_merge(out)
-
- return self.skip_add.add(out, x)
-
- # return out + x
-
-
-class FeatureFusionBlock_custom(nn.Module):
- """Feature fusion block.
- """
-
- def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True):
- """Init.
-
- Args:
- features (int): number of features
- """
- super(FeatureFusionBlock_custom, self).__init__()
-
- self.deconv = deconv
- self.align_corners = align_corners
-
- self.groups=1
-
- self.expand = expand
- out_features = features
- if self.expand:
- out_features = features//2
-
- self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1)
-
- self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn)
- self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn)
-
- self.skip_add = nn.quantized.FloatFunctional()
-
- def forward(self, *xs):
- """Forward pass.
-
- Returns:
- tensor: output
- """
- output = xs[0]
-
- if len(xs) == 2:
- res = self.resConfUnit1(xs[1])
- output = self.skip_add.add(output, res)
- # output += res
-
- output = self.resConfUnit2(output)
-
- output = nn.functional.interpolate(
- output, scale_factor=2, mode="bilinear", align_corners=self.align_corners
- )
-
- output = self.out_conv(output)
-
- return output
-
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/exp/upernet_global_small/config.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/exp/upernet_global_small/config.py
deleted file mode 100644
index 01db96bf9b0be531aa0eaf62fee51543712f8670..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/exp/upernet_global_small/config.py
+++ /dev/null
@@ -1,38 +0,0 @@
-_base_ = [
- '../../configs/_base_/models/upernet_uniformer.py',
- '../../configs/_base_/datasets/ade20k.py',
- '../../configs/_base_/default_runtime.py',
- '../../configs/_base_/schedules/schedule_160k.py'
-]
-model = dict(
- backbone=dict(
- type='UniFormer',
- embed_dim=[64, 128, 320, 512],
- layers=[3, 4, 8, 3],
- head_dim=64,
- drop_path_rate=0.25,
- windows=False,
- hybrid=False
- ),
- decode_head=dict(
- in_channels=[64, 128, 320, 512],
- num_classes=150
- ),
- auxiliary_head=dict(
- in_channels=320,
- num_classes=150
- ))
-
-# AdamW optimizer, no weight decay for position embedding & layer norm in backbone
-optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01,
- paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
- 'relative_position_bias_table': dict(decay_mult=0.),
- 'norm': dict(decay_mult=0.)}))
-
-lr_config = dict(_delete_=True, policy='poly',
- warmup='linear',
- warmup_iters=1500,
- warmup_ratio=1e-6,
- power=1.0, min_lr=0.0, by_epoch=False)
-
-data=dict(samples_per_gpu=2)
\ No newline at end of file
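The `poly` schedule with linear warmup above is easy to sketch numerically. A minimal sketch, assuming mmcv-style formulas and `max_iters=160000` implied by the `schedule_160k` base config (both are assumptions, not read from the mmcv source):

```python
def poly_lr(step, base_lr=6e-5, max_iters=160_000, power=1.0, min_lr=0.0,
            warmup_iters=1500, warmup_ratio=1e-6):
    """Poly LR with linear warmup (assumed mmcv-style formulas)."""
    if step < warmup_iters:
        # linear warmup: scale rises from warmup_ratio toward 1
        k = (1 - step / warmup_iters) * (1 - warmup_ratio)
        return base_lr * (1 - k)
    # polynomial decay toward min_lr; power=1.0 makes it linear
    coeff = (1 - step / max_iters) ** power
    return (base_lr - min_lr) * coeff + min_lr

print(poly_lr(80_000))  # 3e-05: halfway through training, the LR has decayed to half
```

With `power=1.0` this is plain linear decay from `lr=0.00006` down to `min_lr=0.0` after the 1500-iteration warmup.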
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/midas_net_custom.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/midas_net_custom.py
deleted file mode 100644
index 50e4acb5e53d5fabefe3dde16ab49c33c2b7797c..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/midas_net_custom.py
+++ /dev/null
@@ -1,128 +0,0 @@
-"""MidasNet: Network for monocular depth estimation trained by mixing several datasets.
-This file contains code that is adapted from
-https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py
-"""
-import torch
-import torch.nn as nn
-
-from .base_model import BaseModel
-from .blocks import FeatureFusionBlock, FeatureFusionBlock_custom, Interpolate, _make_encoder
-
-
-class MidasNet_small(BaseModel):
- """Network for monocular depth estimation.
- """
-
- def __init__(self, path=None, features=64, backbone="efficientnet_lite3", non_negative=True, exportable=True, channels_last=False, align_corners=True,
- blocks={'expand': True}):
- """Init.
-
- Args:
- path (str, optional): Path to saved model. Defaults to None.
- features (int, optional): Number of features. Defaults to 64.
- backbone (str, optional): Backbone network for encoder. Defaults to "efficientnet_lite3".
- """
- print("Loading weights: ", path)
-
- super(MidasNet_small, self).__init__()
-
- use_pretrained = False if path else True
-
- self.channels_last = channels_last
- self.blocks = blocks
- self.backbone = backbone
-
- self.groups = 1
-
- features1=features
- features2=features
- features3=features
- features4=features
- self.expand = False
- if "expand" in self.blocks and self.blocks['expand']:
- self.expand = True
- features1=features
- features2=features*2
- features3=features*4
- features4=features*8
-
- self.pretrained, self.scratch = _make_encoder(self.backbone, features, use_pretrained, groups=self.groups, expand=self.expand, exportable=exportable)
-
- self.scratch.activation = nn.ReLU(False)
-
- self.scratch.refinenet4 = FeatureFusionBlock_custom(features4, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners)
- self.scratch.refinenet3 = FeatureFusionBlock_custom(features3, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners)
- self.scratch.refinenet2 = FeatureFusionBlock_custom(features2, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners)
- self.scratch.refinenet1 = FeatureFusionBlock_custom(features1, self.scratch.activation, deconv=False, bn=False, align_corners=align_corners)
-
-
- self.scratch.output_conv = nn.Sequential(
- nn.Conv2d(features, features//2, kernel_size=3, stride=1, padding=1, groups=self.groups),
- Interpolate(scale_factor=2, mode="bilinear"),
- nn.Conv2d(features//2, 32, kernel_size=3, stride=1, padding=1),
- self.scratch.activation,
- nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
- nn.ReLU(True) if non_negative else nn.Identity(),
- nn.Identity(),
- )
-
- if path:
- self.load(path)
-
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input data (image)
-
- Returns:
- tensor: depth
- """
- if self.channels_last:
- print("self.channels_last = ", self.channels_last)
- # contiguous() is not in-place; keep the returned tensor
- x = x.contiguous(memory_format=torch.channels_last)
-
-
- layer_1 = self.pretrained.layer1(x)
- layer_2 = self.pretrained.layer2(layer_1)
- layer_3 = self.pretrained.layer3(layer_2)
- layer_4 = self.pretrained.layer4(layer_3)
-
- layer_1_rn = self.scratch.layer1_rn(layer_1)
- layer_2_rn = self.scratch.layer2_rn(layer_2)
- layer_3_rn = self.scratch.layer3_rn(layer_3)
- layer_4_rn = self.scratch.layer4_rn(layer_4)
-
-
- path_4 = self.scratch.refinenet4(layer_4_rn)
- path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
- path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
- path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
-
- out = self.scratch.output_conv(path_1)
-
- return torch.squeeze(out, dim=1)
-
-
-
-def fuse_model(m):
- prev_previous_type = nn.Identity()
- prev_previous_name = ''
- previous_type = nn.Identity()
- previous_name = ''
- for name, module in m.named_modules():
- if prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d and type(module) == nn.ReLU:
- # print("FUSED ", prev_previous_name, previous_name, name)
- torch.quantization.fuse_modules(m, [prev_previous_name, previous_name, name], inplace=True)
- elif prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d:
- # print("FUSED ", prev_previous_name, previous_name)
- torch.quantization.fuse_modules(m, [prev_previous_name, previous_name], inplace=True)
- # elif previous_type == nn.Conv2d and type(module) == nn.ReLU:
- # print("FUSED ", previous_name, name)
- # torch.quantization.fuse_modules(m, [previous_name, name], inplace=True)
-
- prev_previous_type = previous_type
- prev_previous_name = previous_name
- previous_type = type(module)
- previous_name = name
\ No newline at end of file
diff --git a/spaces/Anthony7906/MengHuiMXD_GPT/chatgpt - macOS.command b/spaces/Anthony7906/MengHuiMXD_GPT/chatgpt - macOS.command
deleted file mode 100644
index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000
--- a/spaces/Anthony7906/MengHuiMXD_GPT/chatgpt - macOS.command
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/bin/bash
-echo Opening ChuanhuChatGPT...
-cd "$(dirname "${BASH_SOURCE[0]}")"
-nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 &
-sleep 5
-open http://127.0.0.1:7860
-echo "Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). To stop it, run \"pkill -f 'ChuanhuChatbot'\" in a terminal."
\ No newline at end of file
diff --git a/spaces/AnthonyTruchetPoC/persistent-docker/src/apps/streamlit_demo.py b/spaces/AnthonyTruchetPoC/persistent-docker/src/apps/streamlit_demo.py
deleted file mode 100644
index f97e3f0809ab2585c9ff2fb8bb94c98e9bb3b8c6..0000000000000000000000000000000000000000
--- a/spaces/AnthonyTruchetPoC/persistent-docker/src/apps/streamlit_demo.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import os
-from pathlib import Path
-
-import numpy as np
-import pandas as pd
-import streamlit as st
-
-from athai.data_utils import cached_download_csv
-
-
-st.title("Uber pickups in NYC")
-
-DATE_COLUMN = "date/time"
-DATA_URL = (
- "https://s3-us-west-2.amazonaws.com/"
- "streamlit-demo-data/uber-raw-data-sep14.csv.gz"
-)
-
-DATA_PATH = Path(os.environ.get("APP_DATA", "."))  # fall back to the working directory if APP_DATA is unset
-
-
-@st.cache_data
-def load_data(nrows):
- data = cached_download_csv(DATA_PATH, DATA_URL, nrows=nrows)
-
- def lowercase(x):
- return str(x).lower()
-
- data.rename(lowercase, axis="columns", inplace=True)
- data[DATE_COLUMN] = pd.to_datetime(data[DATE_COLUMN])
- return data
-
-
-data_load_state = st.text("Loading data...")
-data = load_data(10000)
-data_load_state.text("Done! (cached)")
-
-if st.checkbox("Show raw data"):
- st.subheader("Raw data")
- st.write(data)
-
-st.subheader("Number of pickups by hour")
-hist_values = np.histogram(data[DATE_COLUMN].dt.hour, bins=24, range=(0, 24))[
- 0
-]
-st.bar_chart(hist_values)
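The `np.histogram(..., bins=24, range=(0, 24))[0]` call above yields one count per hour bucket; a standalone check of the same pattern with synthetic hours (not the Uber data):

```python
import numpy as np

# synthetic stand-in for data[DATE_COLUMN].dt.hour
hours = np.array([0, 0, 5, 17, 17, 17, 23])
counts = np.histogram(hours, bins=24, range=(0, 24))[0]

print(counts[17])  # 3
```

Bucket `i` covers `[i, i+1)` (the last bucket is closed on the right), so hour 23 lands in `counts[23]`.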
-
-# Some number in the range 0-23
-hour_to_filter = st.slider("hour", 0, 23, 17)
-filtered_data = data[data[DATE_COLUMN].dt.hour == hour_to_filter]
-
-st.subheader("Map of all pickups at %s:00" % hour_to_filter)
-st.map(filtered_data)
-
-uploaded_file = st.file_uploader("Choose a file")
-if uploaded_file is not None:
- st.write(uploaded_file.name)
- bytes_data = uploaded_file.getvalue()
- st.write(len(bytes_data), "bytes")
-
-
-st.markdown("")
diff --git a/spaces/AnticPan/Clothes2Human/README.md b/spaces/AnticPan/Clothes2Human/README.md
deleted file mode 100644
index b596d54380d706acbb8840c545d1a76121a3310f..0000000000000000000000000000000000000000
--- a/spaces/AnticPan/Clothes2Human/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Clothes2Human
-emoji: 🏃
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.44.4
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Anustup/NS_AI_LABS/src/segments.py b/spaces/Anustup/NS_AI_LABS/src/segments.py
deleted file mode 100644
index ec2650dceade5d0b2022264f6419115eab085aea..0000000000000000000000000000000000000000
--- a/spaces/Anustup/NS_AI_LABS/src/segments.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from typing import Any, Dict, List
-
-import copy
-
-def merge_timestamps(timestamps: List[Dict[str, Any]], merge_window: float = 5, max_merge_size: float = 30, padding_left: float = 1, padding_right: float = 1):
- result = []
-
- if len(timestamps) == 0:
- return result
- if max_merge_size is None:
- return timestamps
-
- if padding_left is None:
- padding_left = 0
- if padding_right is None:
- padding_right = 0
-
- processed_time = 0
- current_segment = None
-
- for i in range(len(timestamps)):
- next_segment = timestamps[i]
-
- delta = next_segment['start'] - processed_time
-
- # Note that segments can still be longer than the max merge size, they just won't be merged in that case
- if current_segment is None or (merge_window is not None and delta > merge_window) \
- or next_segment['end'] - current_segment['start'] > max_merge_size:
- # Finish the current segment
- if current_segment is not None:
- # Add right padding
- finish_padding = min(padding_right, delta / 2) if delta < padding_left + padding_right else padding_right
- current_segment['end'] += finish_padding
- delta -= finish_padding
-
- result.append(current_segment)
-
- # Start a new segment
- current_segment = copy.deepcopy(next_segment)
-
- # Pad the segment
- current_segment['start'] = current_segment['start'] - min(padding_left, delta)
- processed_time = current_segment['end']
-
- else:
- # Merge the segment
- current_segment['end'] = next_segment['end']
- processed_time = current_segment['end']
-
- # Add the last segment
- if current_segment is not None:
- current_segment['end'] += padding_right
- result.append(current_segment)
-
- return result
\ No newline at end of file
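A standalone run of `merge_timestamps` from `src/segments.py` above (the function restated so the example executes on its own, behavior unchanged): two segments closer than `merge_window` collapse into one, each resulting segment gets 1 s of padding on both sides, and a distant segment starts fresh.

```python
import copy
from typing import Any, Dict, List

def merge_timestamps(timestamps: List[Dict[str, Any]], merge_window: float = 5,
                     max_merge_size: float = 30,
                     padding_left: float = 1, padding_right: float = 1):
    # restated from src/segments.py above
    result = []
    if len(timestamps) == 0:
        return result
    if max_merge_size is None:
        return timestamps
    if padding_left is None:
        padding_left = 0
    if padding_right is None:
        padding_right = 0

    processed_time = 0
    current_segment = None
    for next_segment in timestamps:
        delta = next_segment['start'] - processed_time
        # segments can still exceed max_merge_size; they just won't be merged
        if current_segment is None or (merge_window is not None and delta > merge_window) \
                or next_segment['end'] - current_segment['start'] > max_merge_size:
            if current_segment is not None:
                # right-pad the finished segment, shrinking the pad if the gap is tight
                finish_padding = min(padding_right, delta / 2) if delta < padding_left + padding_right else padding_right
                current_segment['end'] += finish_padding
                delta -= finish_padding
                result.append(current_segment)
            # start a new, left-padded segment
            current_segment = copy.deepcopy(next_segment)
            current_segment['start'] -= min(padding_left, delta)
            processed_time = current_segment['end']
        else:
            # merge into the current segment
            current_segment['end'] = next_segment['end']
            processed_time = current_segment['end']

    if current_segment is not None:
        current_segment['end'] += padding_right
        result.append(current_segment)
    return result

segments = [{'start': 1.0, 'end': 2.0},    # < merge_window apart ...
            {'start': 2.5, 'end': 4.0},    # ... so these two merge
            {'start': 30.0, 'end': 31.0}]  # far away: its own segment
merged = merge_timestamps(segments)
print(merged)  # [{'start': 0.0, 'end': 5.0}, {'start': 29.0, 'end': 32.0}]
```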
diff --git a/spaces/AquaSuisei/ChatGPTXE/modules/pdf_func.py b/spaces/AquaSuisei/ChatGPTXE/modules/pdf_func.py
deleted file mode 100644
index 0aba6b7b891fc527c79b887256b0cbaa81ae5b3d..0000000000000000000000000000000000000000
--- a/spaces/AquaSuisei/ChatGPTXE/modules/pdf_func.py
+++ /dev/null
@@ -1,180 +0,0 @@
-from types import SimpleNamespace
-import pdfplumber
-import logging
-from llama_index import Document
-
-def prepare_table_config(crop_page):
- """Prepare table-detection boundaries; the page must be the original (root) page.
-
- From https://github.com/jsvine/pdfplumber/issues/242
- """
- page = crop_page.root_page # root/parent
- cs = page.curves + page.edges
- def curves_to_edges():
- """See https://github.com/jsvine/pdfplumber/issues/127"""
- edges = []
- for c in cs:
- edges += pdfplumber.utils.rect_to_edges(c)
- return edges
- edges = curves_to_edges()
- return {
- "vertical_strategy": "explicit",
- "horizontal_strategy": "explicit",
- "explicit_vertical_lines": edges,
- "explicit_horizontal_lines": edges,
- "intersection_y_tolerance": 10,
- }
-
-def get_text_outside_table(crop_page):
- ts = prepare_table_config(crop_page)
- if len(ts["explicit_vertical_lines"]) == 0 or len(ts["explicit_horizontal_lines"]) == 0:
- return crop_page
-
- ### Get the bounding boxes of the tables on the page.
- bboxes = [table.bbox for table in crop_page.root_page.find_tables(table_settings=ts)]
- def not_within_bboxes(obj):
- """Check if the object is in any of the table's bbox."""
- def obj_in_bbox(_bbox):
- """See https://github.com/jsvine/pdfplumber/blob/stable/pdfplumber/table.py#L404"""
- v_mid = (obj["top"] + obj["bottom"]) / 2
- h_mid = (obj["x0"] + obj["x1"]) / 2
- x0, top, x1, bottom = _bbox
- return (h_mid >= x0) and (h_mid < x1) and (v_mid >= top) and (v_mid < bottom)
- return not any(obj_in_bbox(__bbox) for __bbox in bboxes)
-
- return crop_page.filter(not_within_bboxes)
-# Use LaTeX for formulas: wrap inline formulas in $ and display formulas in $$
-
-extract_words = lambda page: page.extract_words(keep_blank_chars=True, y_tolerance=0, x_tolerance=1, extra_attrs=["fontname", "size", "object_type"])
-# dict_keys(['text', 'x0', 'x1', 'top', 'doctop', 'bottom', 'upright', 'direction', 'fontname', 'size'])
-
-def get_title_with_cropped_page(first_page):
- title = [] # collect the title
- x0,top,x1,bottom = first_page.bbox # page bounding box
-
- for word in extract_words(first_page):
- word = SimpleNamespace(**word)
-
- if word.size >= 14:
- title.append(word.text)
- title_bottom = word.bottom
- elif word.text == "Abstract": # locate the page abstract
- top = word.top
-
- user_info = [i["text"] for i in extract_words(first_page.within_bbox((x0,title_bottom,x1,top)))]
- # Crop away the upper part. within_bbox: fully included; crop: partially included
- return title, user_info, first_page.within_bbox((x0,top,x1,bottom))
-
-def get_column_cropped_pages(pages, two_column=True):
- new_pages = []
- for page in pages:
- if two_column:
- left = page.within_bbox((0, 0, page.width/2, page.height),relative=True)
- right = page.within_bbox((page.width/2, 0, page.width, page.height), relative=True)
- new_pages.append(left)
- new_pages.append(right)
- else:
- new_pages.append(page)
-
- return new_pages
-
-def parse_pdf(filename, two_column = True):
- level = logging.getLogger().level
- if level == logging.getLevelName("DEBUG"):
- logging.getLogger().setLevel("INFO")
-
- with pdfplumber.open(filename) as pdf:
- title, user_info, first_page = get_title_with_cropped_page(pdf.pages[0])
- new_pages = get_column_cropped_pages([first_page] + pdf.pages[1:], two_column)
-
- chapters = []
- # tuple (chapter_name, [pageid] (start,stop), chapter_text)
- create_chapter = lambda page_start,name_top,name_bottom: SimpleNamespace(
- name=[],
- name_top=name_top,
- name_bottom=name_bottom,
- record_chapter_name = True,
-
- page_start=page_start,
- page_stop=None,
-
- text=[],
- )
- cur_chapter = None
-
- # Iterate over the PDF document page by page
- for idx, page in enumerate(new_pages):
- page = get_text_outside_table(page)
-
- # Iterate over the page text line by line
- for word in extract_words(page):
- word = SimpleNamespace(**word)
-
- # Check whether the line is printed in a large font; if so, treat it as the start of a new chapter
- if word.size >= 11: # a chapter name appears
- if cur_chapter is None:
- cur_chapter = create_chapter(page.page_number, word.top, word.bottom)
- elif not cur_chapter.record_chapter_name or (word.bottom != cur_chapter.name_bottom and word.top != cur_chapter.name_top):
- # stop recording the chapter name
- cur_chapter.page_stop = page.page_number # stop id
- chapters.append(cur_chapter)
- # reset the current chapter info
- cur_chapter = create_chapter(page.page_number, word.top, word.bottom)
-
- # print(word.size, word.top, word.bottom, word.text)
- cur_chapter.name.append(word.text)
- else:
- cur_chapter.record_chapter_name = False # chapter name finished
- cur_chapter.text.append(word.text)
- else:
- # handle the last chapter
- cur_chapter.page_stop = page.page_number # stop id
- chapters.append(cur_chapter)
-
- for i in chapters:
- logging.info(f"section: {i.name} pages:{i.page_start, i.page_stop} word-count:{len(i.text)}")
- logging.debug(" ".join(i.text))
-
- title = " ".join(title)
- user_info = " ".join(user_info)
- text = f"Article Title: {title}, Information:{user_info}\n"
- for idx, chapter in enumerate(chapters):
- chapter.name = " ".join(chapter.name)
- text += f"The {idx}th Chapter {chapter.name}: " + " ".join(chapter.text) + "\n"
-
- logging.getLogger().setLevel(level)
- return Document(text=text, extra_info={"title": title})
-
-BASE_POINTS = """
-1. Who are the authors?
-2. What is the process of the proposed method?
-3. What is the performance of the proposed method? Please note down its performance metrics.
-4. What are the baseline models and their performances? Please note down these baseline methods.
-5. What dataset did this paper use?
-"""
-
-READING_PROMPT = """
-You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n
-Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n
-When you are reading, you need to focus on these key points:{}
-"""
-
-READING_PROMT_V2 = """
-You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n
-Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n
-When you are reading, you need to focus on these key points:{}
-
-You also need to generate a brief but informative title for this part.
-Your return format:
-- title: '...'
-- summary: '...'
-"""
-
-SUMMARY_PROMPT = "You are a researcher helper bot. Now you need to read the summaries of a research paper."
-
-
-if __name__ == '__main__':
- # Test code
- z = parse_pdf("./build/test.pdf")
- print(z.extra_info["title"])
- print(z.text)
\ No newline at end of file
diff --git a/spaces/Ariharasudhan/YoloV5/utils/metrics.py b/spaces/Ariharasudhan/YoloV5/utils/metrics.py
deleted file mode 100644
index 65ea463c0dab647ea81ec0fa95441dddfd631e33..0000000000000000000000000000000000000000
--- a/spaces/Ariharasudhan/YoloV5/utils/metrics.py
+++ /dev/null
@@ -1,363 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Model validation metrics
-"""
-
-import math
-import warnings
-from pathlib import Path
-
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-
-from utils import TryExcept, threaded
-
-
-def fitness(x):
- # Model fitness as a weighted combination of metrics
- w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
- return (x[:, :4] * w).sum(1)
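`fitness` scores a results row as 0.1·mAP@0.5 + 0.9·mAP@0.5:0.95, with P and R carrying zero weight. A standalone check on a synthetic metrics row (function restated from above):

```python
import numpy as np

def fitness(x):
    # weighted combination of [P, R, mAP@0.5, mAP@0.5:0.95] (restated from above)
    w = [0.0, 0.0, 0.1, 0.9]
    return (x[:, :4] * w).sum(1)

metrics = np.array([[0.8, 0.7, 0.6, 0.4]])  # P, R, mAP@0.5, mAP@0.5:0.95
score = fitness(metrics)
print(score)  # ≈ [0.42] = 0.1*0.6 + 0.9*0.4
```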
-
-
-def smooth(y, f=0.05):
- # Box filter of fraction f
- nf = round(len(y) * f * 2) // 2 + 1 # number of filter elements (must be odd)
- p = np.ones(nf // 2) # ones padding
- yp = np.concatenate((p * y[0], y, p * y[-1]), 0) # y padded
- return np.convolve(yp, np.ones(nf) / nf, mode='valid') # y-smoothed
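`smooth` pads the signal with its boundary values and applies a centered moving average, so a unit spike spreads evenly over the window (restated to run standalone):

```python
import numpy as np

def smooth(y, f=0.05):
    # box filter of fraction f (restated from above)
    nf = round(len(y) * f * 2) // 2 + 1  # number of filter elements
    p = np.ones(nf // 2)                 # pad with boundary values
    yp = np.concatenate((p * y[0], y, p * y[-1]), 0)
    return np.convolve(yp, np.ones(nf) / nf, mode='valid')

y = np.zeros(10)
y[3] = 1.0
s = smooth(y, f=0.25)  # nf = 3: every point becomes a 3-wide windowed mean
```

The spike's mass is preserved: `s[2:5]` each hold 1/3 and the remaining points stay 0.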
-
-
-def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=(), eps=1e-16, prefix=""):
- """ Compute the average precision, given the recall and precision curves.
- Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
- # Arguments
- tp: True positives (nparray, nx1 or nx10).
- conf: Objectness value from 0-1 (nparray).
- pred_cls: Predicted object classes (nparray).
- target_cls: True object classes (nparray).
- plot: Plot precision-recall curve at mAP@0.5
- save_dir: Plot save directory
- # Returns
- The average precision as computed in py-faster-rcnn.
- """
-
- # Sort by objectness
- i = np.argsort(-conf)
- tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
-
- # Find unique classes
- unique_classes, nt = np.unique(target_cls, return_counts=True)
- nc = unique_classes.shape[0] # number of classes
-
- # Create Precision-Recall curve and compute AP for each class
- px, py = np.linspace(0, 1, 1000), [] # for plotting
- ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000))
- for ci, c in enumerate(unique_classes):
- i = pred_cls == c
- n_l = nt[ci] # number of labels
- n_p = i.sum() # number of predictions
- if n_p == 0 or n_l == 0:
- continue
-
- # Accumulate FPs and TPs
- fpc = (1 - tp[i]).cumsum(0)
- tpc = tp[i].cumsum(0)
-
- # Recall
- recall = tpc / (n_l + eps) # recall curve
- r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases
-
- # Precision
- precision = tpc / (tpc + fpc) # precision curve
- p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score
-
- # AP from recall-precision curve
- for j in range(tp.shape[1]):
- ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j])
- if plot and j == 0:
- py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5
-
- # Compute F1 (harmonic mean of precision and recall)
- f1 = 2 * p * r / (p + r + eps)
- names = [v for k, v in names.items() if k in unique_classes] # list: only classes that have data
- names = dict(enumerate(names)) # to dict
- if plot:
- plot_pr_curve(px, py, ap, Path(save_dir) / f'{prefix}PR_curve.png', names)
- plot_mc_curve(px, f1, Path(save_dir) / f'{prefix}F1_curve.png', names, ylabel='F1')
- plot_mc_curve(px, p, Path(save_dir) / f'{prefix}P_curve.png', names, ylabel='Precision')
- plot_mc_curve(px, r, Path(save_dir) / f'{prefix}R_curve.png', names, ylabel='Recall')
-
- i = smooth(f1.mean(0), 0.1).argmax() # max F1 index
- p, r, f1 = p[:, i], r[:, i], f1[:, i]
- tp = (r * nt).round() # true positives
- fp = (tp / (p + eps) - tp).round() # false positives
- return tp, fp, p, r, f1, ap, unique_classes.astype(int)
-
-
-def compute_ap(recall, precision):
- """ Compute the average precision, given the recall and precision curves
- # Arguments
- recall: The recall curve (list)
- precision: The precision curve (list)
- # Returns
- Average precision, precision curve, recall curve
- """
-
- # Append sentinel values to beginning and end
- mrec = np.concatenate(([0.0], recall, [1.0]))
- mpre = np.concatenate(([1.0], precision, [0.0]))
-
- # Compute the precision envelope
- mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
-
- # Integrate area under curve
- method = 'interp' # methods: 'continuous', 'interp'
- if method == 'interp':
- x = np.linspace(0, 1, 101) # 101-point interp (COCO)
- ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate
- else: # 'continuous'
- i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes
- ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve
-
- return ap, mpre, mrec
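`compute_ap` pads the curves with (0, 1) and (1, 0) sentinels, takes the monotone precision envelope, and integrates a 101-point interpolation. With a single operating point at recall 0.5 / precision 1.0, the envelope is a unit square plus a triangle, so AP ≈ 0.75 (restated to run standalone):

```python
import numpy as np

def compute_ap(recall, precision):
    # restated from above: sentinels, precision envelope, 101-point interp
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([1.0], precision, [0.0]))
    mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))  # monotone envelope
    x = np.linspace(0, 1, 101)
    ap = np.trapz(np.interp(x, mrec, mpre), x)
    return ap, mpre, mrec

ap, mpre, mrec = compute_ap(np.array([0.5]), np.array([1.0]))
print(round(ap, 4))  # 0.75
```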
-
-
-class ConfusionMatrix:
- # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix
- def __init__(self, nc, conf=0.25, iou_thres=0.45):
- self.matrix = np.zeros((nc + 1, nc + 1))
- self.nc = nc # number of classes
- self.conf = conf
- self.iou_thres = iou_thres
-
- def process_batch(self, detections, labels):
- """
- Update the confusion matrix for a batch of detections and ground-truth labels.
- Boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- detections (Array[N, 6]), x1, y1, x2, y2, conf, class
- labels (Array[M, 5]), class, x1, y1, x2, y2
- Returns:
- None, updates confusion matrix accordingly
- """
- if detections is None:
- gt_classes = labels.int()
- for gc in gt_classes:
- self.matrix[self.nc, gc] += 1 # background FN
- return
-
- detections = detections[detections[:, 4] > self.conf]
- gt_classes = labels[:, 0].int()
- detection_classes = detections[:, 5].int()
- iou = box_iou(labels[:, 1:], detections[:, :4])
-
- x = torch.where(iou > self.iou_thres)
- if x[0].shape[0]:
- matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()
- if x[0].shape[0] > 1:
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
- else:
- matches = np.zeros((0, 3))
-
- n = matches.shape[0] > 0
- m0, m1, _ = matches.transpose().astype(int)
- for i, gc in enumerate(gt_classes):
- j = m0 == i
- if n and sum(j) == 1:
- self.matrix[detection_classes[m1[j]], gc] += 1 # correct
- else:
- self.matrix[self.nc, gc] += 1 # true background
-
- if n:
- for i, dc in enumerate(detection_classes):
- if not any(m1 == i):
- self.matrix[dc, self.nc] += 1 # predicted background
-
- def get_matrix(self):
- # renamed from `matrix`: the array assigned to self.matrix in __init__ shadows a method of that name
- return self.matrix
-
- def tp_fp(self):
- tp = self.matrix.diagonal() # true positives
- fp = self.matrix.sum(1) - tp # false positives
- # fn = self.matrix.sum(0) - tp # false negatives (missed detections)
- return tp[:-1], fp[:-1] # remove background class
-
- @TryExcept('WARNING ⚠️ ConfusionMatrix plot failure')
- def plot(self, normalize=True, save_dir='', names=()):
- import seaborn as sn
-
- array = self.matrix / ((self.matrix.sum(0).reshape(1, -1) + 1E-9) if normalize else 1) # normalize columns
- array[array < 0.005] = np.nan # don't annotate (would appear as 0.00)
-
- fig, ax = plt.subplots(1, 1, figsize=(12, 9), tight_layout=True)
- nc, nn = self.nc, len(names) # number of classes, names
- sn.set(font_scale=1.0 if nc < 50 else 0.8) # for label size
- labels = (0 < nn < 99) and (nn == nc) # apply names to ticklabels
- ticklabels = (list(names) + ['background']) if labels else "auto"
- with warnings.catch_warnings():
- warnings.simplefilter('ignore') # suppress empty matrix RuntimeWarning: All-NaN slice encountered
- sn.heatmap(array,
- ax=ax,
- annot=nc < 30,
- annot_kws={
- "size": 8},
- cmap='Blues',
- fmt='.2f',
- square=True,
- vmin=0.0,
- xticklabels=ticklabels,
- yticklabels=ticklabels).set_facecolor((1, 1, 1))
- ax.set_xlabel('True')
- ax.set_ylabel('Predicted')
- ax.set_title('Confusion Matrix')
- fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250)
- plt.close(fig)
-
- def print(self):
- for i in range(self.nc + 1):
- print(' '.join(map(str, self.matrix[i])))
-
-
-def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7):
- # Returns Intersection over Union (IoU) of box1(1,4) to box2(n,4)
-
- # Get the coordinates of bounding boxes
- if xywh: # transform from xywh to xyxy
- (x1, y1, w1, h1), (x2, y2, w2, h2) = box1.chunk(4, -1), box2.chunk(4, -1)
- w1_, h1_, w2_, h2_ = w1 / 2, h1 / 2, w2 / 2, h2 / 2
- b1_x1, b1_x2, b1_y1, b1_y2 = x1 - w1_, x1 + w1_, y1 - h1_, y1 + h1_
- b2_x1, b2_x2, b2_y1, b2_y2 = x2 - w2_, x2 + w2_, y2 - h2_, y2 + h2_
- else: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1.chunk(4, -1)
- b2_x1, b2_y1, b2_x2, b2_y2 = box2.chunk(4, -1)
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- union = w1 * h1 + w2 * h2 - inter + eps
-
- # IoU
- iou = inter / union
- if CIoU or DIoU or GIoU:
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared
- rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center dist ** 2
- if CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
- with torch.no_grad():
- alpha = v / (v - iou + (1 + eps))
- return iou - (rho2 / c2 + v * alpha) # CIoU
- return iou - rho2 / c2 # DIoU
- c_area = cw * ch + eps # convex area
- return iou - (c_area - union) / c_area # GIoU https://arxiv.org/pdf/1902.09630.pdf
- return iou # IoU
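As a quick sanity check on the coordinate handling at the top of `bbox_iou`, here is a tiny standalone sketch of the xywh-to-xyxy conversion that the torch version performs with `chunk`. The helper name is made up for illustration; it is not part of the file above.

```python
# Hypothetical helper mirroring the xywh -> xyxy branch of bbox_iou:
# (x, y) is the box center, (w, h) its size; each corner is half a side away.
def xywh_to_xyxy(x, y, w, h):
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)

corners = xywh_to_xyxy(1.0, 1.0, 2.0, 2.0)
# a 2x2 box centered at (1, 1) spans (0, 0) to (2, 2)
```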
-
-
-def box_iou(box1, box2, eps=1e-7):
- # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- box1 (Tensor[N, 4])
- box2 (Tensor[M, 4])
- Returns:
- iou (Tensor[N, M]): the NxM matrix containing the pairwise
- IoU values for every element in boxes1 and boxes2
- """
-
- # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
- (a1, a2), (b1, b2) = box1.unsqueeze(1).chunk(2, 2), box2.unsqueeze(0).chunk(2, 2)
- inter = (torch.min(a2, b2) - torch.max(a1, b1)).clamp(0).prod(2)
-
- # IoU = inter / (area1 + area2 - inter)
- return inter / ((a2 - a1).prod(2) + (b2 - b1).prod(2) - inter + eps)
-
-
-def bbox_ioa(box1, box2, eps=1e-7):
- """ Returns the intersection over box2 area given box1, box2. Boxes are x1y1x2y2
- box1: np.array of shape(4)
- box2: np.array of shape(nx4)
- returns: np.array of shape(n)
- """
-
- # Get the coordinates of bounding boxes
- b1_x1, b1_y1, b1_x2, b1_y2 = box1
- b2_x1, b2_y1, b2_x2, b2_y2 = box2.T
-
- # Intersection area
- inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \
- (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0)
-
- # box2 area
- box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + eps
-
- # Intersection over box2 area
- return inter_area / box2_area
-
-
-def wh_iou(wh1, wh2, eps=1e-7):
- # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2
- wh1 = wh1[:, None] # [N,1,2]
- wh2 = wh2[None] # [1,M,2]
- inter = torch.min(wh1, wh2).prod(2) # [N,M]
- return inter / (wh1.prod(2) + wh2.prod(2) - inter + eps) # iou = inter / (area1 + area2 - inter)
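The IoU helpers above all follow the same pattern: clamp the overlap of the two boxes to get the intersection area, then divide by the union. A minimal NumPy re-derivation of the pairwise `box_iou` computation (independent of the torch versions above, written only to illustrate the broadcasting trick) looks like this:

```python
import numpy as np

def pairwise_iou(a, b, eps=1e-7):
    # a: (N,4), b: (M,4) boxes in x1y1x2y2 format
    lt = np.maximum(a[:, None, :2], b[None, :, :2])  # (N,M,2) intersection top-left
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])  # (N,M,2) intersection bottom-right
    inter = np.clip(rb - lt, 0, None).prod(2)        # (N,M) intersection area
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter + eps)

boxes1 = np.array([[0., 0., 2., 2.]])
boxes2 = np.array([[1., 1., 3., 3.], [0., 0., 2., 2.]])
iou = pairwise_iou(boxes1, boxes2)
# overlap with the first box is 1, union is 4 + 4 - 1 = 7, so IoU is ~1/7;
# the second box is identical, so IoU is ~1
```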
-
-
-# Plots ----------------------------------------------------------------------------------------------------------------
-
-
-@threaded
-def plot_pr_curve(px, py, ap, save_dir=Path('pr_curve.png'), names=()):
- # Precision-recall curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
- py = np.stack(py, axis=1)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py.T):
- ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision)
- else:
- ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision)
-
- ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean())
- ax.set_xlabel('Recall')
- ax.set_ylabel('Precision')
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- ax.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- ax.set_title('Precision-Recall Curve')
- fig.savefig(save_dir, dpi=250)
- plt.close(fig)
-
-
-@threaded
-def plot_mc_curve(px, py, save_dir=Path('mc_curve.png'), names=(), xlabel='Confidence', ylabel='Metric'):
- # Metric-confidence curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py):
- ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric)
- else:
- ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric)
-
- y = smooth(py.mean(0), 0.05)
- ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}')
- ax.set_xlabel(xlabel)
- ax.set_ylabel(ylabel)
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- ax.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- ax.set_title(f'{ylabel}-Confidence Curve')
- fig.savefig(save_dir, dpi=250)
- plt.close(fig)
diff --git a/spaces/Arvi/Performance_predictor_and_feedback_generator/README.md b/spaces/Arvi/Performance_predictor_and_feedback_generator/README.md
deleted file mode 100644
index c2d35e1bd5db524ac40d3078cc91060399388608..0000000000000000000000000000000000000000
--- a/spaces/Arvi/Performance_predictor_and_feedback_generator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Performance Predictor And Feedback Generator
-emoji: 📚
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.16.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/dev/run_inference_tests.sh b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/dev/run_inference_tests.sh
deleted file mode 100644
index bc9dcc56f06f79fc5efa42c04ffdc07c2787e3ac..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/dev/run_inference_tests.sh
+++ /dev/null
@@ -1,44 +0,0 @@
-#!/bin/bash -e
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-BIN="python tools/train_net.py"
-OUTPUT="inference_test_output"
-NUM_GPUS=2
-
-CFG_LIST=( "${@:1}" )
-
-if [ ${#CFG_LIST[@]} -eq 0 ]; then
- CFG_LIST=( ./configs/quick_schedules/*inference_acc_test.yaml )
-fi
-
-echo "========================================================================"
-echo "Configs to run:"
-echo "${CFG_LIST[@]}"
-echo "========================================================================"
-
-
-for cfg in "${CFG_LIST[@]}"; do
- echo "========================================================================"
- echo "Running $cfg ..."
- echo "========================================================================"
- $BIN \
- --eval-only \
- --num-gpus $NUM_GPUS \
- --config-file "$cfg" \
- OUTPUT_DIR $OUTPUT
- rm -rf $OUTPUT
-done
-
-
-echo "========================================================================"
-echo "Running demo.py ..."
-echo "========================================================================"
-DEMO_BIN="python demo/demo.py"
-COCO_DIR=datasets/coco/val2014
-mkdir -pv $OUTPUT
-
-set -v
-
-$DEMO_BIN --config-file ./configs/quick_schedules/panoptic_fpn_R_50_inference_acc_test.yaml \
- --input $COCO_DIR/COCO_val2014_0000001933* --output $OUTPUT
-rm -rf $OUTPUT
diff --git a/spaces/Benson/text-generation/Examples/App Descargar Msica Mp3.md b/spaces/Benson/text-generation/Examples/App Descargar Msica Mp3.md
deleted file mode 100644
index 42cd80a343a4d13ea6e4001b5a063a3b49b8fd78..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/App Descargar Msica Mp3.md
+++ /dev/null
@@ -1,161 +0,0 @@
-
-
App Download Music MP3: How to Enjoy Free Music Offline
-
Do you love listening to music but hate paying for streaming services or burning through your data? If so, you may want to try an mp3 music downloader app. These apps let you download music from various sources and play it offline on your device. In this article, we will explain what an mp3 music downloader app is, why you need one, and how to choose the best one. We will also review the top 3 mp3 music downloader apps in 2023 and show you how to use them. Let's get started!
What is an mp3 music downloader app?
-
An mp3 music downloader app is a type of software that lets you download music files from online platforms such as YouTube, SoundCloud, Spotify, and more. The downloaded files are usually in MP3 format, a common and widely supported audio format. You can then transfer the files to your device's storage or SD card and play them offline with any music player app.
-
Why do you need an mp3 music downloader app?
-
There are many benefits to using an mp3 music downloader app, such as:
-
-
You can save money by not paying for streaming services or buying songs individually.
-
You can save data by not streaming music online.
-
You can listen to music anytime and anywhere, without an internet or wifi connection.
-
You can create your own playlists and customize your music library.
-
You can discover new songs and artists from different genres and sources.
-
-
How to choose the best mp3 music downloader app?
-
There are many mp3 music downloader apps on the market, but not all of them are reliable and safe. Some may contain malware, viruses, or ads that can harm your device or compromise your privacy. Others may have limited features, low quality, or slow speeds. To choose the best mp3 music downloader app, you should consider the following factors:
-
-
The number and variety of sources it supports.
The quality and speed of the downloads.
-
The ease of use and the user interface.
-
The compatibility and security of the app.
-
The reviews and ratings from other users.
-
-
Top 3 MP3 Music Downloader Apps in 2023
-
Audiomack: Music Downloader
-
Audiomack is one of the most popular and reliable mp3 music downloader apps in 2023. It lets you stream and download the best new trending music from top artists in categories such as Hip Hop, Rap, R&B, EDM, Afropop, and Reggae. You can also listen to your local MP3 files and other files from within the app.
-
Features
-
-
Unlimited streaming of new and trending full tracks and mixtapes.
-
Download songs and full albums for offline listening, with no data usage.
-
Favorite tracks, albums, and playlists, and easily search, browse, and shuffle your collection of favorites.
-
Listen to local music such as MP3, AAC, M4A, WAV, and other files from the local file player.
-
Browse curated playlists by mood, genre, and more.
-
Create unlimited playlists.
-
Follow your favorite artists, producers, and tastemakers.
-
Pros and cons
-
-
-
Pros
-
Cons
-
-
-
Free and unlimited downloads.
-
Some songs may not be available for download due to licensing issues.
-
-
-
High-quality audio and fast speed.
-
Some ads may interrupt the streaming or download process.
-
-
-
Easy to use and navigate.
-
Some features may require a premium subscription.
-
-
-
How to use it
-
-
Download and install the app from the Google Play Store or the App Store.
-
Open the app and sign up or log in with your email, Facebook, or Google account.
-
Browse the home, trending, genres, or playlists sections to find the music you want to stream or download.
-
-
To access your downloaded songs, go to the My Library section and tap Offline Music.
-
To listen to your local music files, go to the My Library section and tap Local Music.
-
To create your own playlists, go to the My Library section and tap Create Playlist. You can add songs from your offline music, local music, or online music.
-
To follow your favorite artists, producers, or tastemakers, go to their profile page and tap the follow button. You can also see their latest uploads, favorites, and playlists.
-
-
Any Video Converter Free
-
Any Video Converter Free is another great mp3 music downloader app in 2023. It is a powerful and versatile video converter that can also extract audio from video files and save it as MP3s. You can download videos from YouTube, Facebook, Vimeo, Dailymotion, and more than 100 other sites. You can also edit videos by trimming, cropping, rotating, and adding effects, subtitles, and watermarks.
-
Features
-
-
Convert any video format to MP4, AVI, MKV, WMV, MOV, FLV, 3GP, WebM, and more.
-
Extract audio from video files and save it as high-quality MP3s.
-
Download online videos from YouTube and other popular sites with one click.
-
Edit videos with various tools such as trimming, cropping, rotating, and adding effects, subtitles, and watermarks.
-
Burn videos to DVD or Blu-ray discs with custom menus and templates.
-
Supports batch conversion and multithreading for faster speed and efficiency.
-
Supports multiple languages and platforms, including Windows and Mac OS X.
-
-
Pros and cons
-
-
-
Pros
-
Cons
-
-
-
Free and unlimited downloads and conversions.
-
Some advanced features may require a paid upgrade.
-
-
-
High-quality audio and video output.
-
-
-
-
Easy to use and customize.
-
Some video formats may not be supported or compatible with some devices.
-
How to use it
-
-
Download and install the app from the official website or the Microsoft Store.
-
Open the app and click the Add Video(s) button to import the video files you want to convert or extract audio from.
-
Select the output format from the drop-down list on the right. To save as MP3, choose Audio Files > MP3 Audio.
-
To download online videos, click the Download Video button and paste the video URL. You can also choose the output format and quality.
-
To edit videos, click the Edit button and use the tools to trim, crop, rotate, and add effects, subtitles, and watermarks.
-
To burn videos to DVD or Blu-ray discs, click the Burn DVD button and select the options and templates.
-
Click the Convert Now button to start the conversion or extraction. You can also check the option to shut down your computer when the conversion is complete.
-
To access your converted or downloaded files, click the Output Folder button or go to the folder you specified in the settings.
-
-
Music Downloader
-
Music Downloader is another mp3 music downloader app in 2023. It is a simple and fast app that lets you download free music from various genres and artists. You can also play music online or offline with its built-in music player, and manage your music files with its file manager.
-
-
Features
-
-
Download free music from various genres and artists.
-
Play music online or offline with its built-in music player.
-
Manage your music files with its file manager.
-
Share your music with your friends via social media or email.
-
Supports multiple languages and platforms, including Android and iOS.
-
-
Pros and cons
-
-
-
Pros
-
Cons
-
-
-
Free and unlimited downloads and playback.
-
Some music may not be licensed or legal to download.
-
-
-
Fast and easy to use.
-
Some music may be low quality or have incorrect tags.
-
-
-
Simple and clean interface.
-
No advanced features or customization options.
-
-
-
How to use it
-
-
Download and install the app from the Google Play Store or the App Store.
-
Open the app and browse music by genre or artist, or search by keyword.
-
To download a song, tap the download icon next to the song title. You can also preview the song by tapping the play icon.
-
To access your downloaded songs, go to the Downloaded section and tap the song you want to play. You can also delete, rename, or share the song from there.
-
To play music online, go to the Online section and tap the song you want to stream. You can also add it to your favorites or playlists from there.
-
To manage your music files, go to the File Manager section and tap the folder you want to open. You can also create new folders and move, copy, or delete files from there.
-
To share your music with your friends, go to the Share section and select the songs you want to share. You can then choose the sharing method, such as social media or email.
-
-
Conclusion
-
-
FAQs
-
-
What is an mp3 music downloader app?
-
An mp3 music downloader app is a type of software that lets you download music files from online platforms such as YouTube, SoundCloud, Spotify, and more. The downloaded files are usually in MP3 format, a common and widely supported audio format. You can then transfer the files to your device's storage or SD card and play them offline with any music player app.
Why do I need an mp3 music downloader app?
-
You need an mp3 music downloader app because it can help you enjoy free music offline on your device. It can save you money by not paying for streaming services or buying songs individually, and save you data by not streaming music online. It also lets you listen to music anytime and anywhere, without an internet or wifi connection, create your own playlists, customize your music library, and discover new songs and artists from different genres and sources.
-
How do I choose the best mp3 music downloader app?
-
To choose the best mp3 music downloader app, you should consider the following factors: the number and variety of sources it supports, the quality and speed of the downloads, the ease of use and the user interface, the compatibility and security of the app, and the reviews and ratings from other users. You should also compare the features, pros, and cons of different apps and try them out before making a final decision.
-
What are the top 3 mp3 music downloader apps in 2023?
-
The top 3 mp3 music downloader apps in 2023 are Audiomack, Any Video Converter Free, and Music Downloader. These apps have been reviewed and rated highly by many users and experts. They offer a wide range of features, sources, quality, and speed for downloading and playing music offline. They are also easy to use, compatible, and secure.
-
How do I use an mp3 music downloader app?
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/BernardoOlisan/vqganclip/CLIP/clip/__init__.py b/spaces/BernardoOlisan/vqganclip/CLIP/clip/__init__.py
deleted file mode 100644
index dcc5619538c0f7c782508bdbd9587259d805e0d9..0000000000000000000000000000000000000000
--- a/spaces/BernardoOlisan/vqganclip/CLIP/clip/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .clip import *
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/regexopt.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/regexopt.py
deleted file mode 100644
index ae0079199b9b026f327aaaa729411f1a43c6cb60..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/regexopt.py
+++ /dev/null
@@ -1,91 +0,0 @@
-"""
- pygments.regexopt
- ~~~~~~~~~~~~~~~~~
-
- An algorithm that generates optimized regexes for matching long lists of
- literal strings.
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import re
-from re import escape
-from os.path import commonprefix
-from itertools import groupby
-from operator import itemgetter
-
-CS_ESCAPE = re.compile(r'[\[\^\\\-\]]')
-FIRST_ELEMENT = itemgetter(0)
-
-
-def make_charset(letters):
- return '[' + CS_ESCAPE.sub(lambda m: '\\' + m.group(), ''.join(letters)) + ']'
-
-
-def regex_opt_inner(strings, open_paren):
- """Return a regex that matches any string in the sorted list of strings."""
- close_paren = open_paren and ')' or ''
- # print strings, repr(open_paren)
- if not strings:
- # print '-> nothing left'
- return ''
- first = strings[0]
- if len(strings) == 1:
- # print '-> only 1 string'
- return open_paren + escape(first) + close_paren
- if not first:
- # print '-> first string empty'
- return open_paren + regex_opt_inner(strings[1:], '(?:') \
- + '?' + close_paren
- if len(first) == 1:
- # multiple one-char strings? make a charset
- oneletter = []
- rest = []
- for s in strings:
- if len(s) == 1:
- oneletter.append(s)
- else:
- rest.append(s)
- if len(oneletter) > 1: # do we have more than one oneletter string?
- if rest:
- # print '-> 1-character + rest'
- return open_paren + regex_opt_inner(rest, '') + '|' \
- + make_charset(oneletter) + close_paren
- # print '-> only 1-character'
- return open_paren + make_charset(oneletter) + close_paren
- prefix = commonprefix(strings)
- if prefix:
- plen = len(prefix)
- # we have a prefix for all strings
- # print '-> prefix:', prefix
- return open_paren + escape(prefix) \
- + regex_opt_inner([s[plen:] for s in strings], '(?:') \
- + close_paren
- # is there a suffix?
- strings_rev = [s[::-1] for s in strings]
- suffix = commonprefix(strings_rev)
- if suffix:
- slen = len(suffix)
- # print '-> suffix:', suffix[::-1]
- return open_paren \
- + regex_opt_inner(sorted(s[:-slen] for s in strings), '(?:') \
- + escape(suffix[::-1]) + close_paren
- # recurse on common 1-string prefixes
- # print '-> last resort'
- return open_paren + \
- '|'.join(regex_opt_inner(list(group[1]), '')
- for group in groupby(strings, lambda s: s[0] == first[0])) \
- + close_paren
-
-
-def regex_opt(strings, prefix='', suffix=''):
- """Return a compiled regex that matches any string in the given list.
-
- The strings to match must be literal strings, not regexes. They will be
- regex-escaped.
-
- *prefix* and *suffix* are pre- and appended to the final regex.
- """
- strings = sorted(strings)
- return prefix + regex_opt_inner(strings, '(') + suffix
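The core idea `regex_opt_inner` implements is factoring a shared literal prefix out of an alternation so the regex engine does less backtracking. A deliberately naive standalone sketch of just that prefix step (the function name here is made up; the real module also handles suffixes, charsets, and recursion) looks like this:

```python
import re
from os.path import commonprefix

def naive_opt(strings):
    # Factor out a shared literal prefix before falling back to a
    # plain alternation, as regex_opt_inner does in its prefix branch.
    strings = sorted(strings)
    prefix = commonprefix(strings)
    if prefix:
        rest = [re.escape(s[len(prefix):]) for s in strings]
        return re.escape(prefix) + '(?:' + '|'.join(rest) + ')'
    return '|'.join(re.escape(s) for s in strings)

# 'interface', 'internal', 'interop' share the prefix 'inter'
pattern = re.compile(naive_opt(['interface', 'internal', 'interop']))
```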
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/filepost.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/filepost.py
deleted file mode 100644
index 36c9252c647e67bc7353c523152568b993c1331f..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/filepost.py
+++ /dev/null
@@ -1,98 +0,0 @@
-from __future__ import absolute_import
-
-import binascii
-import codecs
-import os
-from io import BytesIO
-
-from .fields import RequestField
-from .packages import six
-from .packages.six import b
-
-writer = codecs.lookup("utf-8")[3]
-
-
-def choose_boundary():
- """
- Our embarrassingly-simple replacement for mimetools.choose_boundary.
- """
- boundary = binascii.hexlify(os.urandom(16))
- if not six.PY2:
- boundary = boundary.decode("ascii")
- return boundary
-
-
-def iter_field_objects(fields):
- """
- Iterate over fields.
-
- Supports list of (k, v) tuples and dicts, and lists of
- :class:`~urllib3.fields.RequestField`.
-
- """
- if isinstance(fields, dict):
- i = six.iteritems(fields)
- else:
- i = iter(fields)
-
- for field in i:
- if isinstance(field, RequestField):
- yield field
- else:
- yield RequestField.from_tuples(*field)
-
-
-def iter_fields(fields):
- """
- .. deprecated:: 1.6
-
- Iterate over fields.
-
- The addition of :class:`~urllib3.fields.RequestField` makes this function
- obsolete. Instead, use :func:`iter_field_objects`, which returns
- :class:`~urllib3.fields.RequestField` objects.
-
- Supports list of (k, v) tuples and dicts.
- """
- if isinstance(fields, dict):
- return ((k, v) for k, v in six.iteritems(fields))
-
- return ((k, v) for k, v in fields)
-
-
-def encode_multipart_formdata(fields, boundary=None):
- """
- Encode a dictionary of ``fields`` using the multipart/form-data MIME format.
-
- :param fields:
- Dictionary of fields or list of (key, :class:`~urllib3.fields.RequestField`).
-
- :param boundary:
- If not specified, then a random boundary will be generated using
- :func:`urllib3.filepost.choose_boundary`.
- """
- body = BytesIO()
- if boundary is None:
- boundary = choose_boundary()
-
- for field in iter_field_objects(fields):
- body.write(b("--%s\r\n" % (boundary)))
-
- writer(body).write(field.render_headers())
- data = field.data
-
- if isinstance(data, int):
- data = str(data) # Backwards compatibility
-
- if isinstance(data, six.text_type):
- writer(body).write(data)
- else:
- body.write(data)
-
- body.write(b"\r\n")
-
- body.write(b("--%s--\r\n" % (boundary)))
-
- content_type = str("multipart/form-data; boundary=%s" % boundary)
-
- return body.getvalue(), content_type
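To make the wire format produced by `encode_multipart_formdata` concrete, here is a standalone sketch of the same multipart/form-data layout using only the stdlib. The function name and field values are invented for illustration, and it handles only simple string fields, unlike the full implementation above.

```python
import binascii
import os
from io import BytesIO

def encode_simple(fields):
    # Random hex boundary, as in choose_boundary above
    boundary = binascii.hexlify(os.urandom(16)).decode('ascii')
    body = BytesIO()
    for name, value in fields.items():
        # Each part: --boundary, headers, blank line, value, CRLF
        body.write(('--%s\r\n' % boundary).encode())
        body.write(('Content-Disposition: form-data; name="%s"\r\n\r\n' % name).encode())
        body.write(value.encode() + b'\r\n')
    # Closing delimiter has a trailing --
    body.write(('--%s--\r\n' % boundary).encode())
    return body.getvalue(), 'multipart/form-data; boundary=' + boundary

data, ctype = encode_simple({'user': 'alice'})
```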
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/tomli/_re.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/tomli/_re.py
deleted file mode 100644
index 994bb7493fd92865e6ab87c277ba5741b44c31a9..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/tomli/_re.py
+++ /dev/null
@@ -1,107 +0,0 @@
-# SPDX-License-Identifier: MIT
-# SPDX-FileCopyrightText: 2021 Taneli Hukkinen
-# Licensed to PSF under a Contributor Agreement.
-
-from __future__ import annotations
-
-from datetime import date, datetime, time, timedelta, timezone, tzinfo
-from functools import lru_cache
-import re
-from typing import Any
-
-from ._types import ParseFloat
-
-# E.g.
-# - 00:32:00.999999
-# - 00:32:00
-_TIME_RE_STR = r"([01][0-9]|2[0-3]):([0-5][0-9]):([0-5][0-9])(?:\.([0-9]{1,6})[0-9]*)?"
-
-RE_NUMBER = re.compile(
- r"""
-0
-(?:
- x[0-9A-Fa-f](?:_?[0-9A-Fa-f])* # hex
- |
- b[01](?:_?[01])* # bin
- |
- o[0-7](?:_?[0-7])* # oct
-)
-|
-[+-]?(?:0|[1-9](?:_?[0-9])*) # dec, integer part
-(?P<floatpart>
- (?:\.[0-9](?:_?[0-9])*)? # optional fractional part
- (?:[eE][+-]?[0-9](?:_?[0-9])*)? # optional exponent part
-)
-""",
- flags=re.VERBOSE,
-)
-RE_LOCALTIME = re.compile(_TIME_RE_STR)
-RE_DATETIME = re.compile(
- rf"""
-([0-9]{{4}})-(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[01]) # date, e.g. 1988-10-27
-(?:
- [Tt ]
- {_TIME_RE_STR}
- (?:([Zz])|([+-])([01][0-9]|2[0-3]):([0-5][0-9]))? # optional time offset
-)?
-""",
- flags=re.VERBOSE,
-)
-
-
-def match_to_datetime(match: re.Match) -> datetime | date:
- """Convert a `RE_DATETIME` match to `datetime.datetime` or `datetime.date`.
-
- Raises ValueError if the match does not correspond to a valid date
- or datetime.
- """
- (
- year_str,
- month_str,
- day_str,
- hour_str,
- minute_str,
- sec_str,
- micros_str,
- zulu_time,
- offset_sign_str,
- offset_hour_str,
- offset_minute_str,
- ) = match.groups()
- year, month, day = int(year_str), int(month_str), int(day_str)
- if hour_str is None:
- return date(year, month, day)
- hour, minute, sec = int(hour_str), int(minute_str), int(sec_str)
- micros = int(micros_str.ljust(6, "0")) if micros_str else 0
- if offset_sign_str:
- tz: tzinfo | None = cached_tz(
- offset_hour_str, offset_minute_str, offset_sign_str
- )
- elif zulu_time:
- tz = timezone.utc
- else: # local date-time
- tz = None
- return datetime(year, month, day, hour, minute, sec, micros, tzinfo=tz)
-
-
-@lru_cache(maxsize=None)
-def cached_tz(hour_str: str, minute_str: str, sign_str: str) -> timezone:
- sign = 1 if sign_str == "+" else -1
- return timezone(
- timedelta(
- hours=sign * int(hour_str),
- minutes=sign * int(minute_str),
- )
- )
-
-
-def match_to_localtime(match: re.Match) -> time:
- hour_str, minute_str, sec_str, micros_str = match.groups()
- micros = int(micros_str.ljust(6, "0")) if micros_str else 0
- return time(int(hour_str), int(minute_str), int(sec_str), micros)
-
-
-def match_to_number(match: re.Match, parse_float: ParseFloat) -> Any:
- if match.group("floatpart"):
- return parse_float(match.group())
- return int(match.group(), 0)
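The combination of `RE_DATETIME` and `match_to_datetime` above turns a TOML datetime string into a `datetime` object. A simplified standalone sketch of the same flow (this pattern and helper are illustrative only; they handle just the Zulu-time case, with none of the offset or date-only handling of the full module) looks like this:

```python
import re
from datetime import datetime, timezone

# Simplified analogue of RE_DATETIME: date, 'T'/'t'/space separator,
# time, and a literal Z/z marking UTC.
SIMPLE_DT = re.compile(r'(\d{4})-(\d{2})-(\d{2})[Tt ](\d{2}):(\d{2}):(\d{2})[Zz]')

def parse_zulu(s):
    m = SIMPLE_DT.fullmatch(s)
    y, mo, d, h, mi, sec = (int(g) for g in m.groups())
    return datetime(y, mo, d, h, mi, sec, tzinfo=timezone.utc)

dt = parse_zulu('1988-10-27T00:32:00Z')
```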
diff --git a/spaces/CVPR/LIVE/thrust/testing/unittest/util.h b/spaces/CVPR/LIVE/thrust/testing/unittest/util.h
deleted file mode 100644
index 02c1eb7ce3d3eeaed860daaf628aa13c7816c772..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/testing/unittest/util.h
+++ /dev/null
@@ -1,67 +0,0 @@
-#pragma once
-
-#include
-#include
-#include
-#include
-
-#include
-#include
-#include
-
-namespace unittest
-{
-
-template <typename T>
-  std::string type_name(void)
-{
-  return demangle(typeid(T).name());
-} // end type_name()
-
-// Use this with counting_iterator to avoid generating a range larger than we
-// can represent.
-template <typename T>
-typename thrust::detail::disable_if<
-  thrust::detail::is_floating_point<T>::value
-, T
->::type truncate_to_max_representable(std::size_t n)
-{
-  return thrust::min<std::size_t>(
-    n, static_cast<std::size_t>(thrust::numeric_limits<T>::max())
-  );
-}
-
-// TODO: This probably won't work for `half`.
-template <typename T>
-typename thrust::detail::enable_if<
-  thrust::detail::is_floating_point<T>::value
-, T
->::type truncate_to_max_representable(std::size_t n)
-{
-  return thrust::min<T>(
-    n, thrust::numeric_limits<T>::max()
-  );
-}
-
-} // end unittest
-
-template <typename Iterator>
-void PRINT(Iterator first, Iterator last)
-{
- size_t n = 0;
- for (Iterator i = first; i != last; i++, n++)
- std::cout << ">>> [" << n << "] = " << *i << std::endl;
-}
-
-template <typename Container>
-void PRINT(const Container& c)
-{
- PRINT(c.begin(), c.end());
-}
-
-template <size_t N>
-void PRINT(const char (&c)[N])
-{
- std::cout << std::string(c, c + N) << std::endl;
-}
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/core/triple_chevron_launch.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/core/triple_chevron_launch.h
deleted file mode 100644
index deeffac9dae8bc567face7cf7f8483d41454bbab..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/core/triple_chevron_launch.h
+++ /dev/null
@@ -1,976 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-#include
-#include
-#include
-#include
-
-
-namespace thrust
-{
-
-namespace cuda_cub {
-namespace launcher {
-
- struct triple_chevron
- {
- typedef size_t Size;
- dim3 const grid;
- dim3 const block;
- Size const shared_mem;
- cudaStream_t const stream;
-
- THRUST_RUNTIME_FUNCTION
- triple_chevron(dim3 grid_,
- dim3 block_,
- Size shared_mem_ = 0,
- cudaStream_t stream_ = 0)
- : grid(grid_),
- block(block_),
- shared_mem(shared_mem_),
- stream(stream_) {}
-
-#if 0
- template
- cudaError_t __host__
- doit_host(K k, Args const&... args) const
- {
- k<<>>(args...);
- return cudaPeekAtLastError();
- }
-#else
- template
- cudaError_t __host__
- doit_host(K k, _0 x0) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1, class _2>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1, _2 x2) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1,x2);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1, class _2, class _3>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1, _2 x2, _3 x3) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1,x2,x3);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1, class _2, class _3, class _4>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1,x2,x3,x4);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1,x2,x3,x4,x5);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1,x2,x3,x4,x5,x6);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1,x2,x3,x4,x5,x6,x7);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1,x2,x3,x4,x5,x6,x7,x8);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1,x2,x3,x4,x5,x6,x7,x8,x9);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB,xC);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC, _xD xD) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB,xC,xD);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD, class _xE>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC, _xD xD, _xE xE) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB,xC,xD,xE);
- return cudaPeekAtLastError();
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD, class _xE, class _xF>
- cudaError_t __host__
- doit_host(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC, _xD xD, _xE xE, _xF xF) const
- {
- k<<<grid, block, shared_mem, stream>>>(x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB,xC,xD,xE,xF);
- return cudaPeekAtLastError();
- }
-#endif
-
- template <class Arg>
- size_t __device__
- align_up(size_t offset) const
- {
- size_t alignment = alignment_of<Arg>::value;
- return alignment * ((offset + (alignment - 1))/ alignment);
- }
-
-#if 0
- size_t __device__ argument_pack_size(size_t size) const { return size; }
- template <class Arg, class... Args>
- size_t __device__
- argument_pack_size(size_t size, Arg const& arg, Args const&... args) const
- {
- size = align_up(size);
- return argument_pack_size(size + sizeof(Arg), args...);
- }
-#else
- template <class Arg>
- size_t __device__
- argument_pack_size(size_t size, Arg) const
- {
- return align_up(size) + sizeof(Arg);
- }
- template <class Arg, class _0>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0);
- }
- template <class Arg, class _0, class _1>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1);
- }
- template <class Arg, class _0, class _1, class _2>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1, _2 x2) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1, x2);
- }
- template <class Arg, class _0, class _1, class _2, class _3>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1, _2 x2, _3 x3) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1, x2, x3);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1, x2, x3, x4);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1, x2, x3, x4, x5);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1, x2, x3, x4, x5, x6);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1, x2, x3, x4, x5, x6, x7);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1, x2, x3, x4, x5, x6, x7, x8);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC,_xD xD) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD, class _xE>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC,_xD xD, _xE xE) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD, xE);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD, class _xE, class _xF>
- size_t __device__
- argument_pack_size(size_t size, Arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC,_xD xD, _xE xE, _xF xF) const
- {
- return argument_pack_size(align_up(size) + sizeof(Arg), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD, xE, xF);
- }
-#endif /* variadic */
-
- template <class Arg>
- size_t __device__ copy_arg(char* buffer, size_t offset, Arg arg) const
- {
- offset = align_up(offset);
- for (int i = 0; i != sizeof(Arg); ++i)
- buffer[offset+i] = *((char*)&arg + i);
- return offset + sizeof(Arg);
- }
-
-#if 0
- void __device__ fill_arguments(char*, size_t) const {}
- template <class Arg, class... Args>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg const& arg, Args const& ... args) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), args...);
- }
-#else
- template <class Arg>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg) const
- {
- copy_arg(buffer, offset, arg);
- }
- template <class Arg, class _0>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0);
- }
- template <class Arg, class _0, class _1>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1);
- }
- template <class Arg, class _0, class _1, class _2>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1, _2 x2) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1, x2);
- }
- template <class Arg, class _0, class _1, class _2, class _3>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1, _2 x2, _3 x3) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1, x2, x3);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1, x2, x3, x4);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1, x2, x3, x4, x5);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1, x2, x3, x4, x5, x6);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1, x2, x3, x4, x5, x6, x7);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1, x2, x3, x4, x5, x6, x7, x8);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC,_xD xD) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD, class _xE>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC,_xD xD, _xE xE) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD, xE);
- }
- template <class Arg, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD, class _xE, class _xF>
- void __device__
- fill_arguments(char* buffer, size_t offset, Arg arg, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC,_xD xD, _xE xE, _xF xF) const
- {
- fill_arguments(buffer, copy_arg(buffer, offset, arg), x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD, xE, xF);
- }
-#endif /* variadic */
-
-#if 0
- template <class K, class... Args>
- cudaError_t __device__
- doit_device(K k, Args const&... args) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,args...);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, args...);
- status = launch_device(k, param_buffer);
-#endif
- return status;
- }
-#else
- template <class K, class _0>
- cudaError_t __device__
- doit_device(K k, _0 x0) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
-#endif
- return status;
- }
- template <class K, class _0, class _1>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
-#endif
- return status;
- }
- template <class K, class _0, class _1, class _2>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1, _2 x2) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1,x2);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1,x2);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
- THRUST_UNUSED_VAR(x2);
-#endif
- return status;
- }
- template <class K, class _0, class _1, class _2, class _3>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1, _2 x2, _3 x3) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1,x2,x3);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1,x2,x3);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
- THRUST_UNUSED_VAR(x2);
- THRUST_UNUSED_VAR(x3);
-#endif
- return status;
- }
- template <class K, class _0, class _1, class _2, class _3, class _4>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1,x2,x3,x4);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1,x2,x3,x4);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
- THRUST_UNUSED_VAR(x2);
- THRUST_UNUSED_VAR(x3);
- THRUST_UNUSED_VAR(x4);
-#endif
- return status;
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1,x2,x3,x4,x5);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1,x2,x3,x4,x5);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
- THRUST_UNUSED_VAR(x2);
- THRUST_UNUSED_VAR(x3);
- THRUST_UNUSED_VAR(x4);
- THRUST_UNUSED_VAR(x5);
-#endif
- return status;
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1,x2,x3,x4,x5,x6);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1,x2,x3,x4,x5,x6);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
- THRUST_UNUSED_VAR(x2);
- THRUST_UNUSED_VAR(x3);
- THRUST_UNUSED_VAR(x4);
- THRUST_UNUSED_VAR(x5);
- THRUST_UNUSED_VAR(x6);
-#endif
- return status;
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1,x2,x3,x4,x5,x6,x7);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1,x2,x3,x4,x5,x6,x7);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
- THRUST_UNUSED_VAR(x2);
- THRUST_UNUSED_VAR(x3);
- THRUST_UNUSED_VAR(x4);
- THRUST_UNUSED_VAR(x5);
- THRUST_UNUSED_VAR(x6);
- THRUST_UNUSED_VAR(x7);
-#endif
- return status;
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1,x2,x3,x4,x5,x6,x7,x8);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1,x2,x3,x4,x5,x6,x7,x8);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
- THRUST_UNUSED_VAR(x2);
- THRUST_UNUSED_VAR(x3);
- THRUST_UNUSED_VAR(x4);
- THRUST_UNUSED_VAR(x5);
- THRUST_UNUSED_VAR(x6);
- THRUST_UNUSED_VAR(x7);
- THRUST_UNUSED_VAR(x8);
-#endif
- return status;
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1,x2,x3,x4,x5,x6,x7,x8,x9);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1,x2,x3,x4,x5,x6,x7,x8,x9);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
- THRUST_UNUSED_VAR(x2);
- THRUST_UNUSED_VAR(x3);
- THRUST_UNUSED_VAR(x4);
- THRUST_UNUSED_VAR(x5);
- THRUST_UNUSED_VAR(x6);
- THRUST_UNUSED_VAR(x7);
- THRUST_UNUSED_VAR(x8);
- THRUST_UNUSED_VAR(x9);
-#endif
- return status;
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
- THRUST_UNUSED_VAR(x2);
- THRUST_UNUSED_VAR(x3);
- THRUST_UNUSED_VAR(x4);
- THRUST_UNUSED_VAR(x5);
- THRUST_UNUSED_VAR(x6);
- THRUST_UNUSED_VAR(x7);
- THRUST_UNUSED_VAR(x8);
- THRUST_UNUSED_VAR(x9);
- THRUST_UNUSED_VAR(xA);
-#endif
- return status;
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
- THRUST_UNUSED_VAR(x2);
- THRUST_UNUSED_VAR(x3);
- THRUST_UNUSED_VAR(x4);
- THRUST_UNUSED_VAR(x5);
- THRUST_UNUSED_VAR(x6);
- THRUST_UNUSED_VAR(x7);
- THRUST_UNUSED_VAR(x8);
- THRUST_UNUSED_VAR(x9);
- THRUST_UNUSED_VAR(xA);
- THRUST_UNUSED_VAR(xB);
-#endif
- return status;
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB,xC);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB,xC);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
- THRUST_UNUSED_VAR(x2);
- THRUST_UNUSED_VAR(x3);
- THRUST_UNUSED_VAR(x4);
- THRUST_UNUSED_VAR(x5);
- THRUST_UNUSED_VAR(x6);
- THRUST_UNUSED_VAR(x7);
- THRUST_UNUSED_VAR(x8);
- THRUST_UNUSED_VAR(x9);
- THRUST_UNUSED_VAR(xA);
- THRUST_UNUSED_VAR(xB);
- THRUST_UNUSED_VAR(xC);
-#endif
- return status;
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC,_xD xD) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB,xC,xD);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB,xC,xD);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
- THRUST_UNUSED_VAR(x2);
- THRUST_UNUSED_VAR(x3);
- THRUST_UNUSED_VAR(x4);
- THRUST_UNUSED_VAR(x5);
- THRUST_UNUSED_VAR(x6);
- THRUST_UNUSED_VAR(x7);
- THRUST_UNUSED_VAR(x8);
- THRUST_UNUSED_VAR(x9);
- THRUST_UNUSED_VAR(xA);
- THRUST_UNUSED_VAR(xB);
- THRUST_UNUSED_VAR(xC);
- THRUST_UNUSED_VAR(xD);
-#endif
- return status;
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD, class _xE>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC,_xD xD, _xE xE) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB,xC,xD,xE);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB,xC,xD,xE);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
- THRUST_UNUSED_VAR(x2);
- THRUST_UNUSED_VAR(x3);
- THRUST_UNUSED_VAR(x4);
- THRUST_UNUSED_VAR(x5);
- THRUST_UNUSED_VAR(x6);
- THRUST_UNUSED_VAR(x7);
- THRUST_UNUSED_VAR(x8);
- THRUST_UNUSED_VAR(x9);
- THRUST_UNUSED_VAR(xA);
- THRUST_UNUSED_VAR(xB);
- THRUST_UNUSED_VAR(xC);
- THRUST_UNUSED_VAR(xD);
- THRUST_UNUSED_VAR(xE);
-#endif
- return status;
- }
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD, class _xE, class _xF>
- cudaError_t __device__
- doit_device(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC,_xD xD, _xE xE, _xF xF) const
- {
- cudaError_t status = cudaErrorNotSupported;
-#if __THRUST_HAS_CUDART__
- const size_t size = argument_pack_size(0,x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB,xC,xD,xE,xF);
- void *param_buffer = cudaGetParameterBuffer(64,size);
- fill_arguments((char*)param_buffer, 0, x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,xA,xB,xC,xD,xE,xF);
- status = launch_device(k, param_buffer);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(x0);
- THRUST_UNUSED_VAR(x1);
- THRUST_UNUSED_VAR(x2);
- THRUST_UNUSED_VAR(x3);
- THRUST_UNUSED_VAR(x4);
- THRUST_UNUSED_VAR(x5);
- THRUST_UNUSED_VAR(x6);
- THRUST_UNUSED_VAR(x7);
- THRUST_UNUSED_VAR(x8);
- THRUST_UNUSED_VAR(x9);
- THRUST_UNUSED_VAR(xA);
- THRUST_UNUSED_VAR(xB);
- THRUST_UNUSED_VAR(xC);
- THRUST_UNUSED_VAR(xD);
- THRUST_UNUSED_VAR(xE);
- THRUST_UNUSED_VAR(xF);
-#endif
- return status;
- }
-#endif /* variadic */
-
- template <class K>
- cudaError_t __device__
- launch_device(K k, void* buffer) const
- {
-#if __THRUST_HAS_CUDART__
- return cudaLaunchDevice((void*)k,
- buffer,
- dim3(grid),
- dim3(block),
- shared_mem,
- stream);
-#else
- THRUST_UNUSED_VAR(k);
- THRUST_UNUSED_VAR(buffer);
- return cudaErrorNotSupported;
-#endif
- }
-
-
-#if defined(__NVCOMPILER_CUDA__)
-# define THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(...) \
- (__builtin_is_device_code() ? \
- doit_device(__VA_ARGS__) : doit_host(__VA_ARGS__))
-#elif defined(__CUDA_ARCH__)
-# define THRUST_TRIPLE_LAUNCHER_HOSTDEVICE doit_device
-#else
-# define THRUST_TRIPLE_LAUNCHER_HOSTDEVICE doit_host
-#endif
-
-#if 0
- __thrust_exec_check_disable__
- template <class K, class... Args>
- cudaError_t THRUST_FUNCTION
- doit(K k, Args const&... args) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, args...);
- }
-#else
- __thrust_exec_check_disable__
- template <class K, class _0>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1, class _2>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1, _2 x2) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1, x2);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1, class _2, class _3>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1, _2 x2, _3 x3) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1, x2, x3);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1, class _2, class _3, class _4>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1, x2, x3, x4);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1, x2, x3, x4, x5);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1, x2, x3, x4, x5, x6);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1, x2, x3, x4, x5, x6, x7);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1, x2, x3, x4, x5, x6, x7, x8);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC, _xD xD) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD, class _xE>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC, _xD xD, _xE xE) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD, xE);
- }
- __thrust_exec_check_disable__
- template <class K, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9, class _xA, class _xB, class _xC, class _xD, class _xE, class _xF>
- cudaError_t THRUST_FUNCTION
- doit(K k, _0 x0, _1 x1, _2 x2, _3 x3, _4 x4, _5 x5, _6 x6, _7 x7, _8 x8, _9 x9, _xA xA, _xB xB, _xC xC, _xD xD, _xE xE, _xF xF) const
- {
- return THRUST_TRIPLE_LAUNCHER_HOSTDEVICE(k, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, xA, xB, xC, xD, xE, xF);
- }
-#endif
-#undef THRUST_TRIPLE_LAUNCHER_HOSTDEVICE
- }; // struct triple_chevron
-
-} // namespace launcher
-} // namespace cuda_cub
-
-} // end namespace thrust
diff --git a/spaces/ChallengeHub/Chinese-LangChain/assets/custom.css b/spaces/ChallengeHub/Chinese-LangChain/assets/custom.css
deleted file mode 100644
index 9c18b7c6b5d4a99b5be7299273b8df365bde1289..0000000000000000000000000000000000000000
--- a/spaces/ChallengeHub/Chinese-LangChain/assets/custom.css
+++ /dev/null
@@ -1,190 +0,0 @@
-:root {
- --chatbot-color-light: rgba(255, 255, 255, 0.08);
- --chatbot-color-dark: #121111;
-}
-
-/* status_display */
-#status_display {
- display: flex;
- min-height: 2.5em;
- align-items: flex-end;
- justify-content: flex-end;
-}
-#status_display p {
- font-size: .85em;
- font-family: monospace;
- color: var(--body-text-color-subdued);
-}
-
-
-
-/* usage_display */
-#usage_display {
- height: 1em;
-}
-#usage_display p{
- padding: 0 1em;
- font-size: .85em;
- font-family: monospace;
- color: var(--body-text-color-subdued);
-}
-/* list */
-ol:not(.options), ul:not(.options) {
- padding-inline-start: 2em !important;
-}
-
-/* Thank @Keldos-Li for fixing it */
-/* Light mode (default) */
-#chuanhu_chatbot {
- background-color: var(--chatbot-color-light) !important;
- color: #000000 !important;
-}
-[data-testid = "bot"] {
- background-color: rgba(255, 255, 255, 0.08) !important;
-}
-[data-testid = "user"] {
- background-color: #95EC69 !important;
-}
-
-/* Dark mode */
-.dark #chuanhu_chatbot {
- background-color: var(--chatbot-color-dark) !important;
- color: rgba(255, 255, 255, 0.08) !important;
-}
-.dark [data-testid = "bot"] {
- background-color: #2C2C2C !important;
-}
-.dark [data-testid = "user"] {
- background-color: #26B561 !important;
-}
-
-#chuanhu_chatbot {
- height: 100%;
- min-height: 400px;
-}
-
-[class *= "message"] {
- border-radius: var(--radius-xl) !important;
- border: none;
- padding: var(--spacing-xl) !important;
- font-size: var(--text-md) !important;
- line-height: var(--line-md) !important;
- min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
- min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-}
-[data-testid = "bot"] {
- max-width: 85%;
- border-bottom-left-radius: 0 !important;
-}
-[data-testid = "user"] {
- max-width: 85%;
- width: auto !important;
- border-bottom-right-radius: 0 !important;
-}
-/* Table */
-table {
- margin: 1em 0;
- border-collapse: collapse;
- empty-cells: show;
-}
-td,th {
- border: 1.2px solid var(--border-color-primary) !important;
- padding: 0.2em;
-}
-thead {
- background-color: rgba(175,184,193,0.2);
-}
-thead th {
- padding: .5em .2em;
-}
-/* Inline code */
-code {
- display: inline;
- white-space: break-spaces;
- border-radius: 6px;
- margin: 0 2px 0 2px;
- padding: .2em .4em .1em .4em;
- background-color: rgba(175,184,193,0.2);
-}
-/* Code block */
-pre code {
- display: block;
- overflow: auto;
- white-space: pre;
- background-color: hsla(0, 0%, 0%, 80%)!important;
- border-radius: 10px;
- padding: 1.4em 1.2em 0em 1.4em;
- margin: 1.2em 2em 1.2em 0.5em;
- color: #FFF;
- box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2);
-}
-/* Highlight */
-.highlight .hll { background-color: #49483e }
-.highlight .c { color: #75715e } /* Comment */
-.highlight .err { color: #960050; background-color: #1e0010 } /* Error */
-.highlight .k { color: #66d9ef } /* Keyword */
-.highlight .l { color: #ae81ff } /* Literal */
-.highlight .n { color: #f8f8f2 } /* Name */
-.highlight .o { color: #f92672 } /* Operator */
-.highlight .p { color: #f8f8f2 } /* Punctuation */
-.highlight .ch { color: #75715e } /* Comment.Hashbang */
-.highlight .cm { color: #75715e } /* Comment.Multiline */
-.highlight .cp { color: #75715e } /* Comment.Preproc */
-.highlight .cpf { color: #75715e } /* Comment.PreprocFile */
-.highlight .c1 { color: #75715e } /* Comment.Single */
-.highlight .cs { color: #75715e } /* Comment.Special */
-.highlight .gd { color: #f92672 } /* Generic.Deleted */
-.highlight .ge { font-style: italic } /* Generic.Emph */
-.highlight .gi { color: #a6e22e } /* Generic.Inserted */
-.highlight .gs { font-weight: bold } /* Generic.Strong */
-.highlight .gu { color: #75715e } /* Generic.Subheading */
-.highlight .kc { color: #66d9ef } /* Keyword.Constant */
-.highlight .kd { color: #66d9ef } /* Keyword.Declaration */
-.highlight .kn { color: #f92672 } /* Keyword.Namespace */
-.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */
-.highlight .kr { color: #66d9ef } /* Keyword.Reserved */
-.highlight .kt { color: #66d9ef } /* Keyword.Type */
-.highlight .ld { color: #e6db74 } /* Literal.Date */
-.highlight .m { color: #ae81ff } /* Literal.Number */
-.highlight .s { color: #e6db74 } /* Literal.String */
-.highlight .na { color: #a6e22e } /* Name.Attribute */
-.highlight .nb { color: #f8f8f2 } /* Name.Builtin */
-.highlight .nc { color: #a6e22e } /* Name.Class */
-.highlight .no { color: #66d9ef } /* Name.Constant */
-.highlight .nd { color: #a6e22e } /* Name.Decorator */
-.highlight .ni { color: #f8f8f2 } /* Name.Entity */
-.highlight .ne { color: #a6e22e } /* Name.Exception */
-.highlight .nf { color: #a6e22e } /* Name.Function */
-.highlight .nl { color: #f8f8f2 } /* Name.Label */
-.highlight .nn { color: #f8f8f2 } /* Name.Namespace */
-.highlight .nx { color: #a6e22e } /* Name.Other */
-.highlight .py { color: #f8f8f2 } /* Name.Property */
-.highlight .nt { color: #f92672 } /* Name.Tag */
-.highlight .nv { color: #f8f8f2 } /* Name.Variable */
-.highlight .ow { color: #f92672 } /* Operator.Word */
-.highlight .w { color: #f8f8f2 } /* Text.Whitespace */
-.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */
-.highlight .mf { color: #ae81ff } /* Literal.Number.Float */
-.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */
-.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */
-.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */
-.highlight .sa { color: #e6db74 } /* Literal.String.Affix */
-.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */
-.highlight .sc { color: #e6db74 } /* Literal.String.Char */
-.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */
-.highlight .sd { color: #e6db74 } /* Literal.String.Doc */
-.highlight .s2 { color: #e6db74 } /* Literal.String.Double */
-.highlight .se { color: #ae81ff } /* Literal.String.Escape */
-.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */
-.highlight .si { color: #e6db74 } /* Literal.String.Interpol */
-.highlight .sx { color: #e6db74 } /* Literal.String.Other */
-.highlight .sr { color: #e6db74 } /* Literal.String.Regex */
-.highlight .s1 { color: #e6db74 } /* Literal.String.Single */
-.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */
-.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */
-.highlight .fm { color: #a6e22e } /* Name.Function.Magic */
-.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */
-.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */
-.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */
-.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */
-.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */
diff --git a/spaces/ChandraMohanNayal/AutoGPT/tests/integration/memory_tests.py b/spaces/ChandraMohanNayal/AutoGPT/tests/integration/memory_tests.py
deleted file mode 100644
index eead2da1cfa9b8a99592939623955808fc430068..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/tests/integration/memory_tests.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import random
-import string
-import sys
-import unittest
-from pathlib import Path
-
-from autogpt.config import Config
-from autogpt.memory.local import LocalCache
-
-
-class TestLocalCache(unittest.TestCase):
- def random_string(self, length):
- return "".join(random.choice(string.ascii_letters) for _ in range(length))
-
- def setUp(self):
- cfg = Config()
- self.cache = LocalCache(cfg)
- self.cache.clear()
-
- # Add example texts to the cache
- self.example_texts = [
- "The quick brown fox jumps over the lazy dog",
- "I love machine learning and natural language processing",
- "The cake is a lie, but the pie is always true",
- "ChatGPT is an advanced AI model for conversation",
- ]
-
- for text in self.example_texts:
- self.cache.add(text)
-
- # Add some random strings to test noise
- for _ in range(5):
- self.cache.add(self.random_string(10))
-
- def test_get_relevant(self):
- query = "I'm interested in artificial intelligence and NLP"
- k = 3
- relevant_texts = self.cache.get_relevant(query, k)
-
- print(f"Top {k} relevant texts for the query '{query}':")
- for i, text in enumerate(relevant_texts, start=1):
- print(f"{i}. {text}")
-
- self.assertEqual(len(relevant_texts), k)
- self.assertIn(self.example_texts[1], relevant_texts)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/app.py b/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/app.py
deleted file mode 100644
index 862b82915da9e0751cb28e50f6cd7e815f644f26..0000000000000000000000000000000000000000
--- a/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/Cletrason/toad-in-the-mario-movie").launch()
\ No newline at end of file
diff --git a/spaces/CodeDoes/FrostAura-gpt-neox-20b-fiction-novel-generation/app.py b/spaces/CodeDoes/FrostAura-gpt-neox-20b-fiction-novel-generation/app.py
deleted file mode 100644
index 773fda39ec9184ef2c26dfa444fc43cad0267824..0000000000000000000000000000000000000000
--- a/spaces/CodeDoes/FrostAura-gpt-neox-20b-fiction-novel-generation/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/FrostAura/gpt-neox-20b-fiction-novel-generation").launch()
\ No newline at end of file
diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/diffusionmodules/upscaling.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/diffusionmodules/upscaling.py
deleted file mode 100644
index 03816662098ce1ffac79bd939b892e867ab91988..0000000000000000000000000000000000000000
--- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/diffusionmodules/upscaling.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import torch
-import torch.nn as nn
-import numpy as np
-from functools import partial
-
-from ldm.modules.diffusionmodules.util import extract_into_tensor, make_beta_schedule
-from ldm.util import default
-
-
-class AbstractLowScaleModel(nn.Module):
- # for concatenating a downsampled image to the latent representation
- def __init__(self, noise_schedule_config=None):
- super(AbstractLowScaleModel, self).__init__()
- if noise_schedule_config is not None:
- self.register_schedule(**noise_schedule_config)
-
- def register_schedule(self, beta_schedule="linear", timesteps=1000,
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end,
- cosine_s=cosine_s)
- alphas = 1. - betas
- alphas_cumprod = np.cumprod(alphas, axis=0)
- alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
-
- timesteps, = betas.shape
- self.num_timesteps = int(timesteps)
- self.linear_start = linear_start
- self.linear_end = linear_end
- assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep'
-
- to_torch = partial(torch.tensor, dtype=torch.float32)
-
- self.register_buffer('betas', to_torch(betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))
-
- def q_sample(self, x_start, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise)
-
- def forward(self, x):
- return x, None
-
- def decode(self, x):
- return x
-
-
-class SimpleImageConcat(AbstractLowScaleModel):
- # no noise level conditioning
- def __init__(self):
- super(SimpleImageConcat, self).__init__(noise_schedule_config=None)
- self.max_noise_level = 0
-
- def forward(self, x):
- # fix to constant noise level
- return x, torch.zeros(x.shape[0], device=x.device).long()
-
-
-class ImageConcatWithNoiseAugmentation(AbstractLowScaleModel):
- def __init__(self, noise_schedule_config, max_noise_level=1000, to_cuda=False):
- super().__init__(noise_schedule_config=noise_schedule_config)
- self.max_noise_level = max_noise_level
-
- def forward(self, x, noise_level=None):
- if noise_level is None:
- noise_level = torch.randint(0, self.max_noise_level, (x.shape[0],), device=x.device).long()
- else:
- assert isinstance(noise_level, torch.Tensor)
- z = self.q_sample(x, noise_level)
- return z, noise_level
-
-
-
diff --git a/spaces/CyberPeace-Institute/Cybersecurity-Knowledge-Graph-Extraction/app.py b/spaces/CyberPeace-Institute/Cybersecurity-Knowledge-Graph-Extraction/app.py
deleted file mode 100644
index 57c93354e058684071874caa3285990b8d4b8d24..0000000000000000000000000000000000000000
--- a/spaces/CyberPeace-Institute/Cybersecurity-Knowledge-Graph-Extraction/app.py
+++ /dev/null
@@ -1,103 +0,0 @@
-import streamlit as st
-from transformers import AutoModelForTokenClassification
-from annotated_text import annotated_text
-import numpy as np
-import os, joblib
-
-from utils import get_idxs_from_text
-
-model = AutoModelForTokenClassification.from_pretrained("CyberPeace-Institute/Cybersecurity-Knowledge-Graph", trust_remote_code=True)
-
-role_classifiers = {}
-folder_path = '/arg_role_models'
-for filename in os.listdir(os.getcwd() + folder_path):
- if filename.endswith('.joblib'):
- file_path = os.getcwd() + os.path.join(folder_path, filename)
- clf = joblib.load(file_path)
- arg = filename.split(".")[0]
- role_classifiers[arg] = clf
-
-def annotate(name):
- tokens = [item["token"] for item in output]
- tokens = [token.replace(" ", "") for token in tokens]
- text = model.tokenizer.decode([item["id"] for item in output])
- idxs = get_idxs_from_text(text, tokens)
- labels = [item[name] for item in output]
-
- annotated_text_list = []
- last_label = ""
- cumulative_tokens = ""
- last_id = 0
- for idx, label in zip(idxs, labels):
- to_label = label
- label_short = to_label.split("-")[1] if "-" in to_label else to_label
- if last_label == label_short:
- cumulative_tokens += text[last_id : idx["end_idx"]]
- last_id = idx["end_idx"]
- else:
- if last_label != "":
- if last_label == "O":
- annotated_text_list.append(cumulative_tokens)
- else:
- annotated_text_list.append((cumulative_tokens, last_label))
- last_label = label_short
- cumulative_tokens = idx["word"]
- last_id = idx["end_idx"]
- if last_label == "O":
- annotated_text_list.append(cumulative_tokens)
- else:
- annotated_text_list.append((cumulative_tokens, last_label))
- annotated_text(annotated_text_list)
-
-def get_arg_roles(output):
- args = [(idx, item["argument"], item["token"]) for idx, item in enumerate(output) if item["argument"] != "O"]
-
- entities = []
- current_entity = None
- for position, label, token in args:
- if label.startswith('B-'):
- if current_entity is not None:
- entities.append(current_entity)
- current_entity = {'label': label[2:], 'text': token.replace(" ", ""), 'start': position, 'end': position}
- elif label.startswith('I-'):
- if current_entity is not None:
- current_entity['text'] += ' ' + token.replace(" ", "")
- current_entity['end'] = position
- for entity in entities:
- context = model.tokenizer.decode([item["id"] for item in output[max(0, entity["start"] - 15) : min(len(output), entity["end"] + 15)]])
- entity["context"] = context
-
- for entity in entities:
- if len(model.arg_2_role[entity["label"]]) > 1:
- sent_embed = model.embed_model.encode(entity["context"])
- arg_embed = model.embed_model.encode(entity["text"])
- embed = np.concatenate((sent_embed, arg_embed))
- arg_clf = role_classifiers[entity["label"]]
- role_id = arg_clf.predict(embed.reshape(1, -1))
- role = model.arg_2_role[entity["label"]][role_id[0]]
- entity["role"] = role
- else:
- entity["role"] = model.arg_2_role[entity["label"]][0]
-
- for item in output:
- item["role"] = "O"
- for entity in entities:
- for i in range(entity["start"], entity["end"] + 1):
- output[i]["role"] = entity["role"]
- return output
-
-st.title("Create Knowledge Graphs from Cyber Incidents")
-
-text_input = st.text_area("Enter your text here", height=100)
-
-if text_input or st.button('Apply'):
- output = model(text_input)
- st.subheader("Event Nuggets")
- annotate("nugget")
- st.subheader("Event Arguments")
- annotate("argument")
- st.subheader("Realis of Event Nuggets")
- annotate("realis")
- output = get_arg_roles(output)
- st.subheader("Role of the Event Arguments")
- annotate("role")
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/B_A_S_E_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/B_A_S_E_.py
deleted file mode 100644
index f468a963a1e2a8d503b57f4d7aeff12b8770cc67..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/B_A_S_E_.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .otBase import BaseTTXConverter
-
-
-class table_B_A_S_E_(BaseTTXConverter):
- pass
diff --git a/spaces/DQChoi/image_sticker/README.md b/spaces/DQChoi/image_sticker/README.md
deleted file mode 100644
index 5e3bfca9d3ce44735bb84cfa90bd221ec65df946..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/image_sticker/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Image Sticker
-emoji: 💻
-colorFrom: pink
-colorTo: pink
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Dagfinn1962/prodia2/README.md b/spaces/Dagfinn1962/prodia2/README.md
deleted file mode 100644
index 890f16c85b7d8cd774a973c79fc40f17d6f07725..0000000000000000000000000000000000000000
--- a/spaces/Dagfinn1962/prodia2/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Prodia
-emoji: 🔥
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: pikto/prodia
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Datasculptor/StyleGAN-NADA/model/sg2_model.py b/spaces/Datasculptor/StyleGAN-NADA/model/sg2_model.py
deleted file mode 100644
index 76b6f2bf128a5771c69dd7ce0e28362f1d82e30c..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/StyleGAN-NADA/model/sg2_model.py
+++ /dev/null
@@ -1,817 +0,0 @@
-import math
-import random
-import functools
-import operator
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.autograd import Function
-
-from op import conv2d_gradfix
-
-if torch.cuda.is_available():
- from op.fused_act import FusedLeakyReLU, fused_leaky_relu
- from op.upfirdn2d import upfirdn2d
-else:
- from op.fused_act_cpu import FusedLeakyReLU, fused_leaky_relu
- from op.upfirdn2d_cpu import upfirdn2d
-
-
-class PixelNorm(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, input):
- return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8)
-
-
-def make_kernel(k):
- k = torch.tensor(k, dtype=torch.float32)
-
- if k.ndim == 1:
- k = k[None, :] * k[:, None]
-
- k /= k.sum()
-
- return k
-
-
-class Upsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel) * (factor ** 2)
- self.register_buffer("kernel", kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad)
-
- return out
-
-
-class Downsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel)
- self.register_buffer("kernel", kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad)
-
- return out
-
-
-class Blur(nn.Module):
- def __init__(self, kernel, pad, upsample_factor=1):
- super().__init__()
-
- kernel = make_kernel(kernel)
-
- if upsample_factor > 1:
- kernel = kernel * (upsample_factor ** 2)
-
- self.register_buffer("kernel", kernel)
-
- self.pad = pad
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, pad=self.pad)
-
- return out
-
-
-class EqualConv2d(nn.Module):
- def __init__(
- self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True
- ):
- super().__init__()
-
- self.weight = nn.Parameter(
- torch.randn(out_channel, in_channel, kernel_size, kernel_size)
- )
- self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2)
-
- self.stride = stride
- self.padding = padding
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel))
-
- else:
- self.bias = None
-
- def forward(self, input):
- out = conv2d_gradfix.conv2d(
- input,
- self.weight * self.scale,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding,
- )
-
- return out
-
- def __repr__(self):
- return (
- f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},"
- f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})"
- )
-
-
-class EqualLinear(nn.Module):
- def __init__(
- self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None
- ):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))
-
- else:
- self.bias = None
-
- self.activation = activation
-
- self.scale = (1 / math.sqrt(in_dim)) * lr_mul
- self.lr_mul = lr_mul
-
- def forward(self, input):
- if self.activation:
- out = F.linear(input, self.weight * self.scale)
- out = fused_leaky_relu(out, self.bias * self.lr_mul)
-
- else:
- out = F.linear(
- input, self.weight * self.scale, bias=self.bias * self.lr_mul
- )
-
- return out
-
- def __repr__(self):
- return (
- f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})"
- )
-
-
-class ModulatedConv2d(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- demodulate=True,
- upsample=False,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- fused=True,
- ):
- super().__init__()
-
- self.eps = 1e-8
- self.kernel_size = kernel_size
- self.in_channel = in_channel
- self.out_channel = out_channel
- self.upsample = upsample
- self.downsample = downsample
-
- if upsample:
- factor = 2
- p = (len(blur_kernel) - factor) - (kernel_size - 1)
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2 + 1
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor)
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1))
-
- fan_in = in_channel * kernel_size ** 2
- self.scale = 1 / math.sqrt(fan_in)
- self.padding = kernel_size // 2
-
- self.weight = nn.Parameter(
- torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)
- )
-
- self.modulation = EqualLinear(style_dim, in_channel, bias_init=1)
-
- self.demodulate = demodulate
- self.fused = fused
-
- def __repr__(self):
- return (
- f"{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, "
- f"upsample={self.upsample}, downsample={self.downsample})"
- )
-
- def forward(self, input, style, is_s_code=False):
- batch, in_channel, height, width = input.shape
-
- if not self.fused:
- weight = self.scale * self.weight.squeeze(0)
-
- if is_s_code:
- style = style[self.modulation]
- else:
- style = self.modulation(style)
-
- if self.demodulate:
- w = weight.unsqueeze(0) * style.view(batch, 1, in_channel, 1, 1)
- dcoefs = (w.square().sum((2, 3, 4)) + 1e-8).rsqrt()
-
- input = input * style.reshape(batch, in_channel, 1, 1)
-
- if self.upsample:
- weight = weight.transpose(0, 1)
- out = conv2d_gradfix.conv_transpose2d(
- input, weight, padding=0, stride=2
- )
- out = self.blur(out)
-
- elif self.downsample:
- input = self.blur(input)
- out = conv2d_gradfix.conv2d(input, weight, padding=0, stride=2)
-
- else:
- out = conv2d_gradfix.conv2d(input, weight, padding=self.padding)
-
- if self.demodulate:
- out = out * dcoefs.view(batch, -1, 1, 1)
-
- return out
-
- if is_s_code:
- style = style[self.modulation]
- else:
- style = self.modulation(style)
-
- style = style.view(batch, 1, in_channel, 1, 1)
- weight = self.scale * self.weight * style
-
- if self.demodulate:
- demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8)
- weight = weight * demod.view(batch, self.out_channel, 1, 1, 1)
-
- weight = weight.view(
- batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
-
- if self.upsample:
- input = input.view(1, batch * in_channel, height, width)
- weight = weight.view(
- batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
- weight = weight.transpose(1, 2).reshape(
- batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size
- )
- out = conv2d_gradfix.conv_transpose2d(
- input, weight, padding=0, stride=2, groups=batch
- )
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
- out = self.blur(out)
-
- elif self.downsample:
- input = self.blur(input)
- _, _, height, width = input.shape
- input = input.view(1, batch * in_channel, height, width)
- out = conv2d_gradfix.conv2d(
- input, weight, padding=0, stride=2, groups=batch
- )
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- else:
- input = input.view(1, batch * in_channel, height, width)
- out = conv2d_gradfix.conv2d(
- input, weight, padding=self.padding, groups=batch
- )
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- return out
-
-
-class NoiseInjection(nn.Module):
- def __init__(self):
- super().__init__()
-
- self.weight = nn.Parameter(torch.zeros(1))
-
- def forward(self, image, noise=None):
- if noise is None:
- batch, _, height, width = image.shape
- noise = image.new_empty(batch, 1, height, width).normal_()
-
- return image + self.weight * noise
-
-
-class ConstantInput(nn.Module):
- def __init__(self, channel, size=4):
- super().__init__()
-
- self.input = nn.Parameter(torch.randn(1, channel, size, size))
-
- def forward(self, input, is_s_code=False):
- if not is_s_code:
- batch = input.shape[0]
- else:
- batch = next(iter(input.values())).shape[0]
-
- out = self.input.repeat(batch, 1, 1, 1)
-
- return out
-
-
-class StyledConv(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=False,
- blur_kernel=[1, 3, 3, 1],
- demodulate=True,
- ):
- super().__init__()
-
- self.conv = ModulatedConv2d(
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=upsample,
- blur_kernel=blur_kernel,
- demodulate=demodulate,
- )
-
- self.noise = NoiseInjection()
- # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1))
- # self.activate = ScaledLeakyReLU(0.2)
- self.activate = FusedLeakyReLU(out_channel)
-
- def forward(self, input, style, noise=None, is_s_code=False):
- out = self.conv(input, style, is_s_code=is_s_code)
- out = self.noise(out, noise=noise)
- # out = out + self.bias
- out = self.activate(out)
-
- return out
-
-
-class ToRGB(nn.Module):
- def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- if upsample:
- self.upsample = Upsample(blur_kernel)
-
- self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False)
- self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
-
- def forward(self, input, style, skip=None, is_s_code=False):
- out = self.conv(input, style, is_s_code=is_s_code)
- out = out + self.bias
-
- if skip is not None:
- skip = self.upsample(skip)
-
- out = out + skip
-
- return out
-
-
-class Generator(nn.Module):
- def __init__(
- self,
- size,
- style_dim,
- n_mlp,
- channel_multiplier=2,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- ):
- super().__init__()
-
- self.size = size
-
- self.style_dim = style_dim
-
- layers = [PixelNorm()]
-
- for i in range(n_mlp):
- layers.append(
- EqualLinear(
- style_dim, style_dim, lr_mul=lr_mlp, activation="fused_lrelu"
- )
- )
-
- self.style = nn.Sequential(*layers)
-
- self.channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- self.input = ConstantInput(self.channels[4])
- self.conv1 = StyledConv(
- self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel
- )
- self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False)
-
- self.log_size = int(math.log(size, 2))
- self.num_layers = (self.log_size - 2) * 2 + 1
-
- self.convs = nn.ModuleList()
- self.upsamples = nn.ModuleList()
- self.to_rgbs = nn.ModuleList()
- self.noises = nn.Module()
-
- in_channel = self.channels[4]
-
- for layer_idx in range(self.num_layers):
- res = (layer_idx + 5) // 2
- shape = [1, 1, 2 ** res, 2 ** res]
- self.noises.register_buffer(f"noise_{layer_idx}", torch.randn(*shape))
-
- for i in range(3, self.log_size + 1):
- out_channel = self.channels[2 ** i]
-
- self.convs.append(
- StyledConv(
- in_channel,
- out_channel,
- 3,
- style_dim,
- upsample=True,
- blur_kernel=blur_kernel,
- )
- )
-
- self.convs.append(
- StyledConv(
- out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel
- )
- )
-
- self.to_rgbs.append(ToRGB(out_channel, style_dim))
-
- in_channel = out_channel
-
- self.n_latent = self.log_size * 2 - 2
-
-
- self.modulation_layers = [self.conv1.conv.modulation, self.to_rgb1.conv.modulation] + \
- [layer.conv.modulation for layer in self.convs] + \
- [layer.conv.modulation for layer in self.to_rgbs]
-
- def make_noise(self):
- device = self.input.input.device
-
- noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)]
-
- for i in range(3, self.log_size + 1):
- for _ in range(2):
- noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device))
-
- return noises
-
- def mean_latent(self, n_latent):
- latent_in = torch.randn(
- n_latent, self.style_dim, device=self.input.input.device
- )
- latent = self.style(latent_in).mean(0, keepdim=True)
-
- return latent
-
- def get_latent(self, input):
- return self.style(input)
-
- def get_s_code(self, styles, input_is_latent):
-
- if not input_is_latent:
- styles = [self.style(s) for s in styles]
-
- s_codes = [{# const block
- self.modulation_layers[0]: self.modulation_layers[0](style[:, 0]), #s0
- self.modulation_layers[1]: self.modulation_layers[1](style[:, 1]), #s1
- # conv layers
- self.modulation_layers[2]: self.modulation_layers[2](style[:, 1]), #s2
- self.modulation_layers[3]: self.modulation_layers[3](style[:, 2]), #s3
- self.modulation_layers[4]: self.modulation_layers[4](style[:, 3]), #s5
- self.modulation_layers[5]: self.modulation_layers[5](style[:, 4]), #s6
- self.modulation_layers[6]: self.modulation_layers[6](style[:, 5]), #s8
- self.modulation_layers[7]: self.modulation_layers[7](style[:, 6]), #s9
- self.modulation_layers[8]: self.modulation_layers[8](style[:, 7]), #s11
- self.modulation_layers[9]: self.modulation_layers[9](style[:, 8]), #s12
- self.modulation_layers[10]: self.modulation_layers[10](style[:, 9]), #s14
- self.modulation_layers[11]: self.modulation_layers[11](style[:, 10]), #s15
- self.modulation_layers[12]: self.modulation_layers[12](style[:, 11]), #s17
- self.modulation_layers[13]: self.modulation_layers[13](style[:, 12]), #s18
- self.modulation_layers[14]: self.modulation_layers[14](style[:, 13]), #s20
- self.modulation_layers[15]: self.modulation_layers[15](style[:, 14]), #s21
- self.modulation_layers[16]: self.modulation_layers[16](style[:, 15]), #s23
- self.modulation_layers[17]: self.modulation_layers[17](style[:, 16]), #s24
- # toRGB layers
- self.modulation_layers[18]: self.modulation_layers[18](style[:, 3]), #s4
- self.modulation_layers[19]: self.modulation_layers[19](style[:, 5]), #s7
- self.modulation_layers[20]: self.modulation_layers[20](style[:, 7]), #s10
- self.modulation_layers[21]: self.modulation_layers[21](style[:, 9]), #s13
- self.modulation_layers[22]: self.modulation_layers[22](style[:, 11]), #s16
- self.modulation_layers[23]: self.modulation_layers[23](style[:, 13]), #s19
- self.modulation_layers[24]: self.modulation_layers[24](style[:, 15]), #s22
- self.modulation_layers[25]: self.modulation_layers[25](style[:, 17]), #s25
- } for style in styles]
-
- return s_codes
-
-
- def forward(
- self,
- styles,
- return_latents=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- input_is_s_code=False,
- noise=None,
- randomize_noise=True,
- ):
- if not input_is_s_code:
- return self.forward_with_w(styles, return_latents, inject_index, truncation, truncation_latent, input_is_latent, noise, randomize_noise)
-
- return self.forward_with_s(styles, return_latents, noise, randomize_noise)
-
- def forward_with_w(
- self,
- styles,
- return_latents=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- noise=None,
- randomize_noise=True,
- ):
- if not input_is_latent:
- styles = [self.style(s) for s in styles]
-
- if noise is None:
- if randomize_noise:
- noise = [None] * self.num_layers
- else:
- noise = [
- getattr(self.noises, f"noise_{i}") for i in range(self.num_layers)
- ]
-
- if truncation < 1:
- style_t = []
-
- for style in styles:
- style_t.append(
- truncation_latent + truncation * (style - truncation_latent)
- )
-
- styles = style_t
-
- if len(styles) < 2:
- inject_index = self.n_latent
-
- if styles[0].ndim < 3:
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
-
- else:
- latent = styles[0]
-
- else:
- if inject_index is None:
- inject_index = random.randint(1, self.n_latent - 1)
-
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1)
-
- latent = torch.cat([latent, latent2], 1)
-
- out = self.input(latent)
- out = self.conv1(out, latent[:, 0], noise=noise[0])
-
- skip = self.to_rgb1(out, latent[:, 1])
-
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
- ):
- out = conv1(out, latent[:, i], noise=noise1)
- out = conv2(out, latent[:, i + 1], noise=noise2)
- skip = to_rgb(out, latent[:, i + 2], skip)
-
- i += 2
-
- image = skip
-
- if return_latents:
- return image, latent
-
- else:
- return image, None
-
- def forward_with_s(
- self,
- styles,
- return_latents=False,
- noise=None,
- randomize_noise=True,
- ):
-
- if noise is None:
- if randomize_noise:
- noise = [None] * self.num_layers
- else:
- noise = [
- getattr(self.noises, f"noise_{i}") for i in range(self.num_layers)
- ]
-
- out = self.input(styles, is_s_code=True)
- out = self.conv1(out, styles, is_s_code=True, noise=noise[0])
-
- skip = self.to_rgb1(out, styles, is_s_code=True)
-
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
- ):
- out = conv1(out, styles, is_s_code=True, noise=noise1)
- out = conv2(out, styles, is_s_code=True, noise=noise2)
- skip = to_rgb(out, styles, skip, is_s_code=True)
-
- image = skip
-
- if return_latents:
- return image, styles
-
- else:
- return image, None
-
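The style-mixing path in `forward_with_w` above broadcasts each W code across its share of the layer axis and splices the two at `inject_index`. A minimal NumPy sketch of that splice (the sizes `n_latent=14`, `dim=512` are illustrative assumptions, not values from this file):

```python
import numpy as np

# Hypothetical sizes for illustration; real values depend on the image size.
n_latent, dim = 14, 512
w1 = np.zeros((1, dim))   # first style code
w2 = np.ones((1, dim))    # second style code
inject_index = 5

# Broadcast each W across its layers, then splice, as in forward_with_w.
latent = np.concatenate([
    np.repeat(w1[:, None, :], inject_index, axis=1),
    np.repeat(w2[:, None, :], n_latent - inject_index, axis=1),
], axis=1)

print(latent.shape)  # (1, 14, 512)
```

Layers before `inject_index` take their style from the first code, the rest from the second, which is what produces coarse/fine style mixing.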
-class ConvLayer(nn.Sequential):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- bias=True,
- activate=True,
- ):
- layers = []
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- layers.append(Blur(blur_kernel, pad=(pad0, pad1)))
-
- stride = 2
- self.padding = 0
-
- else:
- stride = 1
- self.padding = kernel_size // 2
-
- layers.append(
- EqualConv2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=self.padding,
- stride=stride,
- bias=bias and not activate,
- )
- )
-
- if activate:
- layers.append(FusedLeakyReLU(out_channel, bias=bias))
-
- super().__init__(*layers)
-
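The downsampling branch of `ConvLayer` picks asymmetric blur padding so that a stride-2 convolution halves the spatial size exactly. The arithmetic, pulled out as a standalone sketch (the helper name is ours):

```python
def blur_pad(blur_len=4, kernel_size=3, factor=2):
    """Reproduce ConvLayer's padding arithmetic for the downsample path."""
    p = (blur_len - factor) + (kernel_size - 1)
    return (p + 1) // 2, p // 2

# For the default blur_kernel [1, 3, 3, 1] and a 3x3 conv:
print(blur_pad())  # (2, 2)
```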
-
-class ResBlock(nn.Module):
- def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- self.conv1 = ConvLayer(in_channel, in_channel, 3)
- self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True)
-
- self.skip = ConvLayer(
- in_channel, out_channel, 1, downsample=True, activate=False, bias=False
- )
-
- def forward(self, input):
- out = self.conv1(input)
- out = self.conv2(out)
-
- skip = self.skip(input)
- out = (out + skip) / math.sqrt(2)
-
- return out
-
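`ResBlock.forward` divides the summed branches by `math.sqrt(2)`; assuming the two branches are roughly independent unit-variance signals, this keeps the output variance near 1 as depth grows. A quick empirical check of that assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(100_000)  # stand-in for the conv branch
b = rng.standard_normal(100_000)  # stand-in for the skip branch

out = (a + b) / np.sqrt(2)        # same scaling as ResBlock.forward
print(round(out.var(), 2))        # close to 1
```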
-
-class Discriminator(nn.Module):
- def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- convs = [ConvLayer(3, channels[size], 1)]
-
- log_size = int(math.log(size, 2))
-
- in_channel = channels[size]
-
- for i in range(log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
-
- convs.append(ResBlock(in_channel, out_channel, blur_kernel))
-
- in_channel = out_channel
-
- self.convs = nn.Sequential(*convs)
-
- self.stddev_group = 4
- self.stddev_feat = 1
-
- self.final_conv = ConvLayer(in_channel + 1, channels[4], 3)
- self.final_linear = nn.Sequential(
- EqualLinear(channels[4] * 4 * 4, channels[4], activation="fused_lrelu"),
- EqualLinear(channels[4], 1),
- )
-
- def forward(self, input):
- out = self.convs(input)
-
- batch, channel, height, width = out.shape
- group = min(batch, self.stddev_group)
- stddev = out.view(
- group, -1, self.stddev_feat, channel // self.stddev_feat, height, width
- )
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
- stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
- stddev = stddev.repeat(group, 1, height, width)
- out = torch.cat([out, stddev], 1)
-
- out = self.final_conv(out)
-
- out = out.view(batch, -1)
- out = self.final_linear(out)
-
- return out
-
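The minibatch standard-deviation block in `Discriminator.forward` appends one extra channel carrying the per-group feature spread. A NumPy re-implementation of the same reshaping (the function name is ours; useful for checking the shapes):

```python
import numpy as np

def minibatch_stddev(x, group_size=4, num_feat=1):
    """NumPy sketch of the stddev feature in Discriminator.forward."""
    batch, channel, height, width = x.shape
    group = min(batch, group_size)
    y = x.reshape(group, -1, num_feat, channel // num_feat, height, width)
    y = np.sqrt(y.var(axis=0) + 1e-8)              # spread across the group
    y = y.mean(axis=(2, 3, 4), keepdims=True)      # (b/g, feat, 1, 1, 1)
    y = y.squeeze(2)                               # (b/g, feat, 1, 1)
    y = np.tile(y, (group, 1, height, width))      # broadcast back to batch
    return np.concatenate([x, y], axis=1)          # extra stddev channel(s)

x = np.random.randn(8, 16, 4, 4)
print(minibatch_stddev(x).shape)  # (8, 17, 4, 4)
```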
diff --git a/spaces/Detomo/ai-avatar-frontend/Dockerfile b/spaces/Detomo/ai-avatar-frontend/Dockerfile
deleted file mode 100644
index d23e1cc0d787e3a8c5434d2613603a8fd71967d2..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-avatar-frontend/Dockerfile
+++ /dev/null
@@ -1,20 +0,0 @@
-# Use the latest Node.js image
-FROM node:latest
-
-# Set the working directory inside the container
-WORKDIR /app
-
-# Copy package.json and yarn.lock into the container
-COPY package.json yarn.lock ./
-
-# Install the dependencies with Yarn
-RUN yarn install
-
-# Copy the full source tree and remaining files into the container
-COPY . .
-
-# Expose the port the app listens on (e.g. 3000)
-EXPOSE 3000
-
-# Run the application when the container starts
-CMD ["yarn", "start"]
\ No newline at end of file
diff --git a/spaces/DonDoesStuff/sd_xl_base_0.9/README.md b/spaces/DonDoesStuff/sd_xl_base_0.9/README.md
deleted file mode 100644
index fec0cf7ea01c07ea38cfe784718b6bf02ac20fdf..0000000000000000000000000000000000000000
--- a/spaces/DonDoesStuff/sd_xl_base_0.9/README.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: DreamlikeArt-PhotoReal 2.0
-emoji: 🧘🏻♀️
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.16.1
-app_file: app.py
-pinned: false
-duplicated_from: phenomenon1981/DreamlikeArt-PhotoReal-2.0
----
----
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/GetCode.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/GetCode.py
deleted file mode 100644
index 62e64dc8cbc5ad2bb16aef5da8f6d41c26b24170..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/GetCode.py
+++ /dev/null
@@ -1,232 +0,0 @@
-
-
-
-import os
-import pickle
-import numpy as np
-from dnnlib import tflib
-import tensorflow as tf
-
-import argparse
-
-def LoadModel(dataset_name):
- # Initialize TensorFlow.
- tflib.init_tf()
- model_path='./model/'
- model_name=dataset_name+'.pkl'
-
- tmp=os.path.join(model_path,model_name)
- with open(tmp, 'rb') as f:
- _, _, Gs = pickle.load(f)
- return Gs
-
-def lerp(a,b,t):
- return a + (b - a) * t
-
-#stylegan-ada
-def SelectName(layer_name,suffix):
- if suffix is None:
- tmp1='add:0' in layer_name
- tmp2='shape=(?,' in layer_name
- tmp4='G_synthesis_1' in layer_name
- tmp= tmp1 and tmp2 and tmp4
- else:
- tmp1=('/Conv0_up'+suffix) in layer_name
- tmp2=('/Conv1'+suffix) in layer_name
- tmp3=('4x4/Conv'+suffix) in layer_name
- tmp4='G_synthesis_1' in layer_name
- tmp5=('/ToRGB'+suffix) in layer_name
- tmp= (tmp1 or tmp2 or tmp3 or tmp5) and tmp4
- return tmp
-
-
-def GetSNames(suffix):
- #get style tensor name
- with tf.Session() as sess:
- op = sess.graph.get_operations()
- layers=[m.values() for m in op]
-
-
- select_layers=[]
- for layer in layers:
- layer_name=str(layer)
- if SelectName(layer_name,suffix):
- select_layers.append(layer[0])
- return select_layers
-
-def SelectName2(layer_name):
- tmp1='mod_bias' in layer_name
- tmp2='mod_weight' in layer_name
- tmp3='ToRGB' in layer_name
-
- tmp= (tmp1 or tmp2) and (not tmp3)
- return tmp
-
-def GetKName(Gs):
-
- layers=[var for name, var in Gs.components.synthesis.vars.items()]
-
- select_layers=[]
- for layer in layers:
- layer_name=str(layer)
- if SelectName2(layer_name):
- select_layers.append(layer)
- return select_layers
-
-def GetCode(Gs,random_state,num_img,num_once,dataset_name):
- rnd = np.random.RandomState(random_state) #5
-
- truncation_psi=0.7
- truncation_cutoff=8
-
- dlatent_avg=Gs.get_var('dlatent_avg')
-
- dlatents=np.zeros((num_img,512),dtype='float32')
- for i in range(int(num_img/num_once)):
- src_latents = rnd.randn(num_once, Gs.input_shape[1])
- src_dlatents = Gs.components.mapping.run(src_latents, None) # [seed, layer, component]
-
- # Apply truncation trick.
- if truncation_psi is not None and truncation_cutoff is not None:
- layer_idx = np.arange(src_dlatents.shape[1])[np.newaxis, :, np.newaxis]
- ones = np.ones(layer_idx.shape, dtype=np.float32)
- coefs = np.where(layer_idx < truncation_cutoff, truncation_psi * ones, ones)
- src_dlatents_np=lerp(dlatent_avg, src_dlatents, coefs)
- src_dlatents=src_dlatents_np[:,0,:].astype('float32')
- dlatents[(i*num_once):((i+1)*num_once),:]=src_dlatents
- print('get all z and w')
-
- tmp='./npy/'+dataset_name+'/W'
- np.save(tmp,dlatents)
-
-
-def GetImg(Gs,num_img,num_once,dataset_name,save_name='images'):
- print('Generate Image')
- tmp='./npy/'+dataset_name+'/W.npy'
- dlatents=np.load(tmp)
- fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
-
- all_images=[]
- for i in range(int(num_img/num_once)):
- print(i)
- images=[]
- for k in range(num_once):
- tmp=dlatents[i*num_once+k]
- tmp=tmp[None,None,:]
- tmp=np.tile(tmp,(1,Gs.components.synthesis.input_shape[1],1))
- image2= Gs.components.synthesis.run(tmp, randomize_noise=False, output_transform=fmt)
- images.append(image2)
-
- images=np.concatenate(images)
-
- all_images.append(images)
-
- all_images=np.concatenate(all_images)
-
- tmp='./npy/'+dataset_name+'/'+save_name
- np.save(tmp,all_images)
-
-def GetS(dataset_name,num_img):
- print('Generate S')
- tmp='./npy/'+dataset_name+'/W.npy'
- dlatents=np.load(tmp)[:num_img]
-
- with tf.Session() as sess:
- init = tf.global_variables_initializer()
- sess.run(init)
-
- Gs=LoadModel(dataset_name)
- Gs.print_layers() #for ada
- select_layers1=GetSNames(suffix=None) #None,'/mul_1:0','/mod_weight/read:0','/MatMul:0'
- dlatents=dlatents[:,None,:]
- dlatents=np.tile(dlatents,(1,Gs.components.synthesis.input_shape[1],1))
-
- all_s = sess.run(
- select_layers1,
- feed_dict={'G_synthesis_1/dlatents_in:0': dlatents})
-
- layer_names=[layer.name for layer in select_layers1]
- save_tmp=[layer_names,all_s]
- return save_tmp
-
-
-
-
-def convert_images_to_uint8(images, drange=[-1,1], nchw_to_nhwc=False):
- """Convert a minibatch of images from float32 to uint8 with configurable dynamic range.
- Can be used as an output transformation for Network.run().
- """
- if nchw_to_nhwc:
- images = np.transpose(images, [0, 2, 3, 1])
-
- scale = 255 / (drange[1] - drange[0])
- images = images * scale + (0.5 - drange[0] * scale)
-
- np.clip(images, 0, 255, out=images)
- images=images.astype('uint8')
- return images
-
-
-def GetCodeMS(dlatents):
- m=[]
- std=[]
- for i in range(len(dlatents)):
- tmp= dlatents[i]
- tmp_mean=tmp.mean(axis=0)
- tmp_std=tmp.std(axis=0)
- m.append(tmp_mean)
- std.append(tmp_std)
- return m,std
-
-
-
-#%%
-if __name__ == "__main__":
-
-
- parser = argparse.ArgumentParser(description='Process some integers.')
-
- parser.add_argument('--dataset_name',type=str,default='ffhq',
- help='name of dataset, for example, ffhq')
- parser.add_argument('--code_type',choices=['w','s','s_mean_std'],default='w')
-
- args = parser.parse_args()
- random_state=5
- num_img=100_000
- num_once=1_000
- dataset_name=args.dataset_name
-
- if not os.path.isfile('./model/'+dataset_name+'.pkl'):
- url='https://nvlabs-fi-cdn.nvidia.com/stylegan2/networks/'
- name='stylegan2-'+dataset_name+'-config-f.pkl'
- os.system('wget ' +url+name + ' -P ./model/')
- os.system('mv ./model/'+name+' ./model/'+dataset_name+'.pkl')
-
- if not os.path.isdir('./npy/'+dataset_name):
- os.makedirs('./npy/'+dataset_name)
-
- if args.code_type=='w':
- Gs=LoadModel(dataset_name=dataset_name)
- GetCode(Gs,random_state,num_img,num_once,dataset_name)
-# GetImg(Gs,num_img=num_img,num_once=num_once,dataset_name=dataset_name,save_name='images_100K') #no need
- elif args.code_type=='s':
- save_name='S'
- save_tmp=GetS(dataset_name,num_img=2_000)
- tmp='./npy/'+dataset_name+'/'+save_name
- with open(tmp, "wb") as fp:
- pickle.dump(save_tmp, fp)
-
- elif args.code_type=='s_mean_std':
- save_tmp=GetS(dataset_name,num_img=num_img)
- dlatents=save_tmp[1]
- m,std=GetCodeMS(dlatents)
- save_tmp=[m,std]
- save_name='S_mean_std'
- tmp='./npy/'+dataset_name+'/'+save_name
- with open(tmp, "wb") as fp:
- pickle.dump(save_tmp, fp)
-
-
-
-
-
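The truncation trick in `GetCode` above interpolates each of the first `truncation_cutoff` layers toward the average latent with factor `truncation_psi`, leaving later layers untouched. As a standalone NumPy sketch (the `(2, 18, 512)` shape is an assumption for a 1024px model; the function name is ours):

```python
import numpy as np

def truncate(dlatents, dlatent_avg, psi=0.7, cutoff=8):
    """Lerp the first `cutoff` layers toward the average latent."""
    layer_idx = np.arange(dlatents.shape[1])[np.newaxis, :, np.newaxis]
    coefs = np.where(layer_idx < cutoff, psi, 1.0)
    return dlatent_avg + (dlatents - dlatent_avg) * coefs

avg = np.zeros(512)
w = np.ones((2, 18, 512))
out = truncate(w, avg)
print(out[0, 0, 0], out[0, 8, 0])  # layer 0 is truncated, layer 8 is not
```

This trades diversity for fidelity on the coarse layers while keeping fine-detail layers at full strength.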
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_configs/paths_config.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_configs/paths_config.py
deleted file mode 100644
index 282741a6c77bab1a0f6970eef590eeed16924b05..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_configs/paths_config.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import os
-
-# Pretrained models paths
-e4e = './pti/e4e_w+.pt'
-stylegan2_ada_shhq = './pretrained_models/stylegan_human_v2_1024.pkl'
-ir_se50 = '' # './model_ir_se50.pth'
-
-# Dirs for output files
-checkpoints_dir = './outputs/pti/checkpoints/'
-embedding_base_dir = './outputs/pti/embeddings'
-experiments_output_dir = './outputs/pti/'
-
-# Input info
-# Input dir, where the images reside
-input_data_path = 'aligned_image/'
- # Inversion identifier, used to keep track of the inversion results (both the latent code and the generator)
-input_data_id = 'test'
-
-# Keywords
-pti_results_keyword = 'PTI'
-e4e_results_keyword = 'e4e'
-sg2_results_keyword = 'SG2'
-sg2_plus_results_keyword = 'SG2_Plus'
-multi_id_model_type = 'multi_id'
diff --git a/spaces/EdBianchi/JustMovie/README.md b/spaces/EdBianchi/JustMovie/README.md
deleted file mode 100644
index 1e295fe4eda457b306b68c1b04ad84ef33a31955..0000000000000000000000000000000000000000
--- a/spaces/EdBianchi/JustMovie/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Just Movie, what’s next?
-emoji: 🍿
-colorFrom: pink
-colorTo: green
-sdk: streamlit
-sdk_version: 1.2.0
-app_file: app.py
-pinned: false
-license: gpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/ElAnon/nsumr/README.md b/spaces/ElAnon/nsumr/README.md
deleted file mode 100644
index a05299aa521787f7b5115d661fa42a6520a68fe3..0000000000000000000000000000000000000000
--- a/spaces/ElAnon/nsumr/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Nsumr
-emoji: 🚀
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/FaceOnLive/ID-Document-Recognition-SDK/ocrengine/ocrengine.py b/spaces/FaceOnLive/ID-Document-Recognition-SDK/ocrengine/ocrengine.py
deleted file mode 100644
index fb28b3a66735f0b7fbdf7642df1c7c721e6cd396..0000000000000000000000000000000000000000
--- a/spaces/FaceOnLive/ID-Document-Recognition-SDK/ocrengine/ocrengine.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import ctypes
-from ctypes import *
-import os
-
-dll_path = os.path.abspath(os.path.dirname(__file__)) + '/libttvocrengine.so'
-ocr_engine = cdll.LoadLibrary(dll_path)
-
-TTVOcrInit = ocr_engine.TTVOcrInit
-TTVOcrInit.argtypes = [ctypes.c_char_p]
-TTVOcrInit.restype = ctypes.c_char_p
-
-TTVOcrProcess = ocr_engine.TTVOcrProcess
-TTVOcrProcess.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
-TTVOcrProcess.restype = ctypes.c_char_p
-
-TTVOcrCreditCard = ocr_engine.TTVOcrCreditCard
-TTVOcrCreditCard.argtypes = [ctypes.c_char_p]
-TTVOcrCreditCard.restype = ctypes.c_char_p
-
-TTVOcrBarCode = ocr_engine.TTVOcrBarCode
-TTVOcrBarCode.argtypes = [ctypes.c_char_p]
-TTVOcrBarCode.restype = ctypes.c_char_p
-
-TTVOcrGetHWID = ocr_engine.TTVOcrGetHWID
-TTVOcrGetHWID.argtypes = []
-TTVOcrGetHWID.restype = ctypes.c_char_p
-
-TTVOcrSetActivation = ocr_engine.TTVOcrSetActivation
-TTVOcrSetActivation.argtypes = []
-TTVOcrSetActivation.restype = ctypes.c_char_p
-
-dll_path = os.path.abspath(os.path.dirname(__file__)) + '/libttvifchecker.so'
-if_engine = cdll.LoadLibrary(dll_path)
-
-ttv_if_checker = if_engine.ttv_if_checker
-ttv_if_checker.argtypes = [ctypes.c_char_p]
-ttv_if_checker.restype = ctypes.c_int32
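The bindings above all follow the same ctypes pattern: load the shared object, then declare `argtypes`/`restype` for each exported symbol so Python marshals arguments correctly. The same pattern exercised against the C math library (a stand-in target, since `libttvocrengine.so` is proprietary):

```python
import ctypes
import ctypes.util

# Stand-in library: libm/libc is available on any POSIX system.
lib_path = ctypes.util.find_library("m") or ctypes.util.find_library("c")
libm = ctypes.CDLL(lib_path)

cos = libm.cos
cos.argtypes = [ctypes.c_double]  # declare the C signature explicitly,
cos.restype = ctypes.c_double     # as the OCR bindings do for char* above

print(cos(0.0))  # 1.0
```

Without the `restype` declaration, ctypes would assume the function returns a C `int` and silently misinterpret the result.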
diff --git a/spaces/Felladrin/Web-LLM-Mistral-7B-OpenOrca/dist/worker.6af90c76.js b/spaces/Felladrin/Web-LLM-Mistral-7B-OpenOrca/dist/worker.6af90c76.js
deleted file mode 100644
index e6723374a275ea480864a907aea557e3c997344a..0000000000000000000000000000000000000000
--- a/spaces/Felladrin/Web-LLM-Mistral-7B-OpenOrca/dist/worker.6af90c76.js
+++ /dev/null
@@ -1,1261 +0,0 @@
-(()=>{var A,Q,B,g,I,C,E=globalThis,D={},w={},o=E.parcelRequireba71;null==o&&((o=function(A){if(A in D)return D[A].exports;if(A in w){var Q=w[A];delete w[A];var B={id:A,exports:{}};return D[A]=B,Q.call(B.exports,B,B.exports),B.exports}var g=Error("Cannot find module '"+A+"'");throw g.code="MODULE_NOT_FOUND",g}).register=function(A,Q){w[A]=Q},E.parcelRequireba71=o);var N=o.register;N("dXuV6",function(A,Q){var B=o("hXHVm");function g(){this.protocol=null,this.slashes=null,this.auth=null,this.host=null,this.port=null,this.hostname=null,this.hash=null,this.search=null,this.query=null,this.pathname=null,this.path=null,this.href=null}// Reference: RFC 3986, RFC 1808, RFC 2396
-/*
- * define these here so at least they only have to be
- * compiled once on the first module load.
- */var I=/^([a-z0-9.+-]+:)/i,C=/:[0-9]*$/,E=/^(\/\/?(?!\/)[^?\s]*)(\?[^\s]*)?$/,D=["'"].concat(["{","}","|","\\","^","`"].concat(["<",">",'"',"`"," ","\r","\n"," "])),/*
- * Characters that are never ever allowed in a hostname.
- * Note that any invalid chars are also handled, but these
- * are the ones that are *expected* to be seen, so we fast-path
- * them.
- */w=["%","/","?",";","#"].concat(D),N=["/","?","#"],i=/^[+a-z0-9A-Z_-]{0,63}$/,G=/^([+a-z0-9A-Z_-]{0,63})(.*)$/,Y={javascript:!0,"javascript:":!0},F={javascript:!0,"javascript:":!0},h={http:!0,https:!0,ftp:!0,gopher:!0,file:!0,"http:":!0,"https:":!0,"ftp:":!0,"gopher:":!0,"file:":!0},M=o("vur5G");g.prototype.parse=function(A,Q,g){if("string"!=typeof A)throw TypeError("Parameter 'url' must be a string, not "+typeof A);/*
- * Copy chrome, IE, opera backslash-handling behavior.
- * Back slashes before the query string get converted to forward slashes
- * See: https://code.google.com/p/chromium/issues/detail?id=25916
- */var C=A.indexOf("?"),o=-1!==C&&C127?/*
- * we replace non-ASCII char with a temporary placeholder
- * we need this to make sure size of hostname is not
- * broken by replacing non-ASCII by nothing
- */e+="x":e+=f[r];// we test again with ASCII char only
-if(!e.match(i)){var n=t.slice(0,a),m=t.slice(a+1),j=f.match(G);j&&(n.push(j[1]),m.unshift(j[2])),m.length&&(s="/"+m.join(".")+s),this.hostname=n.join(".");break}}}this.hostname.length>255?this.hostname="":this.hostname=this.hostname.toLowerCase(),S||/*
- * IDNA Support: Returns a punycoded representation of "domain".
- * It only converts parts of the domain name that
- * have non-ASCII characters, i.e. it doesn't matter if
- * you call it with a domain that already is ASCII-only.
- */(this.hostname=B.toASCII(this.hostname));var q=this.port?":"+this.port:"",p=this.hostname||"";this.host=p+q,this.href+=this.host,S&&(this.hostname=this.hostname.substr(1,this.hostname.length-2),"/"!==s[0]&&(s="/"+s))}/*
- * now rest is set to the post-host stuff.
- * chop off any delim chars.
- */if(!Y[y])/*
- * First, make 100% sure that any "autoEscape" chars get
- * escaped, even if encodeURIComponent doesn't think they
- * need to be.
- */for(var a=0,O=D.length;a0)&&B.host.split("@");c&&(B.auth=c.shift(),B.hostname=c.shift(),B.host=B.hostname)}return B.search=A.search,B.query=A.query,(null!==B.pathname||null!==B.search)&&(B.path=(B.pathname?B.pathname:"")+(B.search?B.search:"")),B.href=B.format(),B}if(!K.length)return(/*
- * no path at all. easy.
- * we've already handled the other stuff above.
- */B.pathname=null,B.search?B.path="/"+B.search:B.path=null,B.href=B.format(),B);for(var J=K.slice(-1)[0],a=(B.host||A.host||K.length>1)&&("."===J||".."===J)||""===J,H=0,S=K.length;S>=0;S--)"."===(J=K[S])?K.splice(S,1):".."===J?(K.splice(S,1),H++):H&&(K.splice(S,1),H--);// if the path is allowed to go above the root, restore leading ..s
-if(!R&&!y)for(;H--;H)K.unshift("..");R&&""!==K[0]&&(!K[0]||"/"!==K[0].charAt(0))&&K.unshift(""),a&&"/"!==K.join("/").substr(-1)&&K.push("");var t=""===K[0]||K[0]&&"/"===K[0].charAt(0);// put the host back
-if(L){B.hostname=t?"":K.length?K.shift():"",B.host=B.hostname;/*
- * occationaly the auth can get stuck only in host
- * this especially happens in cases like
- * url.resolveObject('mailto:local1@domain1', 'local2@domain2')
- */var c=!!(B.host&&B.host.indexOf("@")>0)&&B.host.split("@");c&&(B.auth=c.shift(),B.hostname=c.shift(),B.host=B.hostname)}return(R=R||B.host&&K.length)&&!t&&K.unshift(""),K.length>0?B.pathname=K.join("/"):(B.pathname=null,B.path=null),(null!==B.pathname||null!==B.search)&&(B.path=(B.pathname?B.pathname:"")+(B.search?B.search:"")),B.auth=A.auth||B.auth,B.slashes=B.slashes||A.slashes,B.href=B.format(),B},g.prototype.parseHost=function(){var A=this.host,Q=C.exec(A);Q&&(":"!==(Q=Q[0])&&(this.port=Q.substr(1)),A=A.substr(0,A.length-Q.length)),A&&(this.hostname=A)}}),N("hXHVm",function(A,Q){!function(B){/** Detect free variables */var g=Q&&!Q.nodeType&&Q,I=A&&!A.nodeType&&A,C="object"==typeof E&&E;(C.global===C||C.window===C||C.self===C)&&(B=C);/**
- * The `punycode` object.
- * @name punycode
- * @type Object
- */var D,/** Temporary variable */w,/** Regular expressions */o=/^xn--/,N=/[^\x20-\x7E]/,i=/[\x2E\u3002\uFF0E\uFF61]/g,/** Error messages */G={overflow:"Overflow: input needs wider integers to process","not-basic":"Illegal input >= 0x80 (not a basic code point)","invalid-input":"Invalid input"},Y=Math.floor,F=String.fromCharCode;/*--------------------------------------------------------------------------*//**
- * A generic error utility function.
- * @private
- * @param {String} type The error type.
- * @returns {Error} Throws a `RangeError` with the applicable error message.
- */function h(A){throw RangeError(G[A])}/**
- * A generic `Array#map` utility function.
- * @private
- * @param {Array} array The array to iterate over.
- * @param {Function} callback The function that gets called for every array
- * item.
- * @returns {Array} A new array of values returned by the callback function.
- */function M(A,Q){for(var B=A.length,g=[];B--;)g[B]=Q(A[B]);return g}/**
- * A simple `Array#map`-like wrapper to work with domain name strings or email
- * addresses.
- * @private
- * @param {String} domain The domain name or email address.
- * @param {Function} callback The function that gets called for every
- * character.
- * @returns {Array} A new string of characters returned by the callback
- * function.
- */function U(A,Q){var B=A.split("@"),g="";return B.length>1&&(// In email addresses, only the domain name should be punycoded. Leave
-// the local part (i.e. everything up to `@`) intact.
-g=B[0]+"@",A=B[1]),g+M(// Avoid `split(regex)` for IE8 compatibility. See #17.
-(A=A.replace(i,".")).split("."),Q).join(".")}/**
- * Creates an array containing the numeric code points of each Unicode
- * character in the string. While JavaScript uses UCS-2 internally,
- * this function will convert a pair of surrogate halves (each of which
- * UCS-2 exposes as separate characters) into a single code point,
- * matching UTF-16.
- * @see `punycode.ucs2.encode`
- * @see
- * @memberOf punycode.ucs2
- * @name decode
- * @param {String} string The Unicode input string (UCS-2).
- * @returns {Array} The new array of code points.
- */function s(A){for(var Q,B,g=[],I=0,C=A.length;I=55296&&Q<=56319&&I65535&&(A-=65536,Q+=F(A>>>10&1023|55296),A=56320|1023&A),Q+=F(A)}).join("")}/**
- * Converts a digit/integer into a basic code point.
- * @see `basicToDigit()`
- * @private
- * @param {Number} digit The numeric value of a basic code point.
- * @returns {Number} The basic code point whose value (when used for
- * representing integers) is `digit`, which needs to be in the range
- * `0` to `base - 1`. If `flag` is non-zero, the uppercase form is
- * used; else, the lowercase form is used. The behavior is undefined
- * if `flag` is non-zero and `digit` has no uppercase form.
- */function R(A,Q){// 0..25 map to ASCII a..z or A..Z
-// 26..35 map to ASCII 0..9
-return A+22+75*(A<26)-((0!=Q)<<5)}/**
- * Bias adaptation function as per section 3.4 of RFC 3492.
- * https://tools.ietf.org/html/rfc3492#section-3.4
- * @private
- */function y(A,Q,B){var g=0;for(A=B?Y(A/700):A>>1,A+=Y(A/Q);A>455;g+=36)A=Y(A/35);return Y(g+36*A/(A+38))}/**
- * Converts a Punycode string of ASCII-only symbols to a string of Unicode
- * symbols.
- * @memberOf punycode
- * @param {String} input The Punycode string of ASCII-only symbols.
- * @returns {String} The resulting string of Unicode symbols.
- */function K(A){// Don't use UCS-2
-var Q,B,g,I,C,E,D,w,o,N,/** Cached calculation results */i,G=[],F=A.length,M=0,U=128,s=72;for(// Handle the basic code points: let `basic` be the number of input code
-// points before the last delimiter, or `0` if there is none, then copy
-// the first basic code points to the output.
-(g=A.lastIndexOf("-"))<0&&(g=0),I=0;I=128&&h("not-basic"),G.push(A.charCodeAt(I));// Main decoding loop: start just after the last delimiter if any basic code
-// points were copied; start at the beginning otherwise.
-for(C=g>0?g+1:0;C=F&&h("invalid-input"),((o=(Q=A.charCodeAt(C++))-48<10?Q-22:Q-65<26?Q-65:Q-97<26?Q-97:36)>=36||o>Y((2147483647-M)/D))&&h("overflow"),M+=o*D,!(o<(N=w<=s?1:w>=s+26?26:w-s));w+=36)D>Y(2147483647/(i=36-N))&&h("overflow"),D*=i;s=y(M-E,B=G.length+1,0==E),Y(M/B)>2147483647-U&&h("overflow"),U+=Y(M/B),M%=B,// Insert `n` at position `i` of the output
-G.splice(M++,0,U)}return k(G)}/**
- * Converts a string of Unicode symbols (e.g. a domain name label) to a
- * Punycode string of ASCII-only symbols.
- * @memberOf punycode
- * @param {String} input The string of Unicode symbols.
- * @returns {String} The resulting Punycode string of ASCII-only symbols.
- */function L(A){var Q,B,g,I,C,E,D,w,o,N,i,/** `inputLength` will hold the number of code points in `input`. */G,/** Cached calculation results */M,U,k,K=[];// Handle the basic code points
-for(E=0,// Cache the length
-G=// Convert the input in UCS-2 to Unicode
-(A=s(A)).length,// Initialize the state
-Q=128,B=0,C=72;E=Q&&iY((2147483647-B)/// Increase `delta` enough to advance the decoder's state to ,
-// but guard against overflow
-(M=g+1))&&h("overflow"),B+=(D-Q)*M,Q=D,E=0;E2147483647&&h("overflow"),i==Q){// Represent delta as a generalized variable-length integer
-for(w=B,o=36;!(w<(N=o<=C?1:o>=C+26?26:o-C));o+=36)k=w-N,U=36-N,K.push(F(R(N+k%U,0))),w=Y(k/U);K.push(F(R(w,0))),C=y(B,M,g==I),B=0,++g}++B,++Q}return K.join("")}/** Expose `punycode` */// Some AMD build optimizers, like r.js, check for specific condition patterns
-// like the following:
-if(/*--------------------------------------------------------------------------*//** Define the public API */D={/**
- * A string representing the current Punycode.js version number.
- * @memberOf punycode
- * @type String
- */version:"1.4.1",/**
- * An object of methods to convert from JavaScript's internal character
- * representation (UCS-2) to Unicode code points, and back.
- * @see
- * @memberOf punycode
- * @type Object
- */ucs2:{decode:s,encode:k},decode:K,encode:L,toASCII:/**
- * Converts a Unicode string representing a domain name or an email address to
- * Punycode. Only the non-ASCII parts of the domain name will be converted,
- * i.e. it doesn't matter if you call it with a domain that's already in
- * ASCII.
- * @memberOf punycode
- * @param {String} input The domain name or email address to convert, as a
- * Unicode string.
- * @returns {String} The Punycode representation of the given domain name or
- * email address.
- */function(A){return U(A,function(A){return N.test(A)?"xn--"+L(A):A})},toUnicode:/**
- * Converts a Punycode string representing a domain name or an email address
- * to Unicode. Only the Punycoded parts of the input will be converted, i.e.
- * it doesn't matter if you call it on a string that has already been
- * converted to Unicode.
- * @memberOf punycode
- * @param {String} input The Punycoded domain name or email address to
- * convert to Unicode.
- * @returns {String} The Unicode representation of the given Punycode
- * string.
- */function(A){return U(A,function(A){return o.test(A)?K(A.slice(4).toLowerCase()):A})}},"function"==typeof define&&"object"==typeof define.amd&&define.amd)define("punycode",function(){return D});else if(g&&I){if(A.exports==g)I.exports=D;else for(w in D)D.hasOwnProperty(w)&&(g[w]=D[w])}else B.punycode=D}(this)}),N("vur5G",function(A,Q){var B=o("5dUJ0"),g=o("737Px"),I=o("f9IXi");A.exports={formats:I,parse:g,stringify:B}}),N("5dUJ0",function(A,Q){var B=o("j8MG6"),g=o("6yski"),I=o("f9IXi"),C=Object.prototype.hasOwnProperty,E={brackets:function(A){return A+"[]"},comma:"comma",indices:function(A,Q){return A+"["+Q+"]"},repeat:function(A){return A}},D=Array.isArray,w=Array.prototype.push,N=function(A,Q){w.apply(A,D(Q)?Q:[Q])},i=Date.prototype.toISOString,G=I.default,Y={addQueryPrefix:!1,allowDots:!1,charset:"utf-8",charsetSentinel:!1,delimiter:"&",encode:!0,encoder:g.encode,encodeValuesOnly:!1,format:G,formatter:I.formatters[G],// deprecated
-indices:!1,serializeDate:function(A){return i.call(A)},skipNulls:!1,strictNullHandling:!1},F={},h=function A(Q,I,C,E,w,o,i,G,h,M,U,s,k,R,y,K){for(var L,c,J=Q,a=K,H=0,S=!1;void 0!==(a=a.get(F))&&!S;){// Where object last appeared in the ref tree
-var t=a.get(Q);if(H+=1,void 0!==t){if(t===H)throw RangeError("Cyclic object value");// Break while
-S=!0}void 0===a.get(F)&&(H=0)}if("function"==typeof G?J=G(I,J):J instanceof Date?J=U(J):"comma"===C&&D(J)&&(J=g.maybeMap(J,function(A){return A instanceof Date?U(A):A})),null===J){if(w)return i&&!R?i(I,Y.encoder,y,"key",s):I;J=""}if("string"==typeof(L=J)||"number"==typeof L||"boolean"==typeof L||"symbol"==typeof L||"bigint"==typeof L||g.isBuffer(J))return i?[k(R?I:i(I,Y.encoder,y,"key",s))+"="+k(i(J,Y.encoder,y,"value",s))]:[k(I)+"="+k(String(J))];var O=[];if(void 0===J)return O;if("comma"===C&&D(J))R&&i&&(J=g.maybeMap(J,i)),c=[{value:J.length>0?J.join(",")||null:void 0}];else if(D(G))c=G;else{var f=Object.keys(J);c=h?f.sort(h):f}for(var e=E&&D(J)&&1===J.length?I+"[]":I,r=0;r0?k+s:""}}),N("j8MG6",function(A,Q){var B=o("d1Huc"),g=o("iB4YM"),I=o("1jt5E"),C=B("%TypeError%"),E=B("%WeakMap%",!0),D=B("%Map%",!0),w=g("WeakMap.prototype.get",!0),N=g("WeakMap.prototype.set",!0),i=g("WeakMap.prototype.has",!0),G=g("Map.prototype.get",!0),Y=g("Map.prototype.set",!0),F=g("Map.prototype.has",!0),h=function(A,Q){for(var B,g=A;null!==(B=g.next);g=B)if(B.key===Q)return g.next=B.next,B.next=A.next,A.next=B,B},M=function(A,Q){var B=h(A,Q);return B&&B.value},U=function(A,Q,B){var g=h(A,Q);g?g.value=B:A.next={key:Q,next:A.next,value:B}};A.exports=function(){var A,Q,B,g={assert:function(A){if(!g.has(A))throw new C("Side channel does not contain "+I(A))},get:function(g){if(E&&g&&("object"==typeof g||"function"==typeof g)){if(A)return w(A,g)}else if(D){if(Q)return G(Q,g)}else if(B)return M(B,g)},has:function(g){if(E&&g&&("object"==typeof g||"function"==typeof g)){if(A)return i(A,g)}else if(D){if(Q)return F(Q,g)}else if(B)return!!h(B,g);return!1},set:function(g,I){E&&g&&("object"==typeof g||"function"==typeof g)?(A||(A=new E),N(A,g,I)):D?(Q||(Q=new D),Y(Q,g,I)):(B||/*
- * Initialize the linked list as an empty node, so that we don't have
- * to special-case handling of the first node: we can always refer to
- * it as (previous node).next, instead of something like (list).head
- */(B={key:{},next:null}),U(B,g,I))}};return g}}),N("d1Huc",function(A,Q){var B=SyntaxError,g=Function,I=TypeError,C=function(A){try{return g('"use strict"; return ('+A+").constructor;")()}catch(A){}},E=Object.getOwnPropertyDescriptor;if(E)try{E({},"")}catch(A){E=null;// this is IE 8, which has a broken gOPD
-}var D=function(){throw new I},w=E?function(){try{return(// eslint-disable-next-line no-unused-expressions, no-caller, no-restricted-properties
-arguments.callee,D)}catch(A){try{// IE 8 throws on Object.getOwnPropertyDescriptor(arguments, '')
-return E(arguments,"callee").get}catch(A){return D}}}():D,N=o("445q1")(),i=o("f9ecA")(),G=Object.getPrototypeOf||(i?function(A){return A.__proto__}// eslint-disable-line no-proto
-:null),Y={},F="undefined"!=typeof Uint8Array&&G?G(Uint8Array):void 0,h={"%AggregateError%":"undefined"==typeof AggregateError?void 0:AggregateError,"%Array%":Array,"%ArrayBuffer%":"undefined"==typeof ArrayBuffer?void 0:ArrayBuffer,"%ArrayIteratorPrototype%":N&&G?G([][Symbol.iterator]()):void 0,"%AsyncFromSyncIteratorPrototype%":void 0,"%AsyncFunction%":Y,"%AsyncGenerator%":Y,"%AsyncGeneratorFunction%":Y,"%AsyncIteratorPrototype%":Y,"%Atomics%":"undefined"==typeof Atomics?void 0:Atomics,"%BigInt%":"undefined"==typeof BigInt?void 0:BigInt,"%BigInt64Array%":"undefined"==typeof BigInt64Array?void 0:BigInt64Array,"%BigUint64Array%":"undefined"==typeof BigUint64Array?void 0:BigUint64Array,"%Boolean%":Boolean,"%DataView%":"undefined"==typeof DataView?void 0:DataView,"%Date%":Date,"%decodeURI%":decodeURI,"%decodeURIComponent%":decodeURIComponent,"%encodeURI%":encodeURI,"%encodeURIComponent%":encodeURIComponent,"%Error%":Error,"%eval%":eval,"%EvalError%":EvalError,"%Float32Array%":"undefined"==typeof Float32Array?void 0:Float32Array,"%Float64Array%":"undefined"==typeof Float64Array?void 0:Float64Array,"%FinalizationRegistry%":"undefined"==typeof FinalizationRegistry?void 0:FinalizationRegistry,"%Function%":g,"%GeneratorFunction%":Y,"%Int8Array%":"undefined"==typeof Int8Array?void 0:Int8Array,"%Int16Array%":"undefined"==typeof Int16Array?void 0:Int16Array,"%Int32Array%":"undefined"==typeof Int32Array?void 0:Int32Array,"%isFinite%":isFinite,"%isNaN%":isNaN,"%IteratorPrototype%":N&&G?G(G([][Symbol.iterator]())):void 0,"%JSON%":"object"==typeof JSON?JSON:void 0,"%Map%":"undefined"==typeof Map?void 0:Map,"%MapIteratorPrototype%":"undefined"!=typeof Map&&N&&G?G(new Map()[Symbol.iterator]()):void 0,"%Math%":Math,"%Number%":Number,"%Object%":Object,"%parseFloat%":parseFloat,"%parseInt%":parseInt,"%Promise%":"undefined"==typeof Promise?void 0:Promise,"%Proxy%":"undefined"==typeof Proxy?void 
0:Proxy,"%RangeError%":RangeError,"%ReferenceError%":ReferenceError,"%Reflect%":"undefined"==typeof Reflect?void 0:Reflect,"%RegExp%":RegExp,"%Set%":"undefined"==typeof Set?void 0:Set,"%SetIteratorPrototype%":"undefined"!=typeof Set&&N&&G?G(new Set()[Symbol.iterator]()):void 0,"%SharedArrayBuffer%":"undefined"==typeof SharedArrayBuffer?void 0:SharedArrayBuffer,"%String%":String,"%StringIteratorPrototype%":N&&G?G(""[Symbol.iterator]()):void 0,"%Symbol%":N?Symbol:void 0,"%SyntaxError%":B,"%ThrowTypeError%":w,"%TypedArray%":F,"%TypeError%":I,"%Uint8Array%":"undefined"==typeof Uint8Array?void 0:Uint8Array,"%Uint8ClampedArray%":"undefined"==typeof Uint8ClampedArray?void 0:Uint8ClampedArray,"%Uint16Array%":"undefined"==typeof Uint16Array?void 0:Uint16Array,"%Uint32Array%":"undefined"==typeof Uint32Array?void 0:Uint32Array,"%URIError%":URIError,"%WeakMap%":"undefined"==typeof WeakMap?void 0:WeakMap,"%WeakRef%":"undefined"==typeof WeakRef?void 0:WeakRef,"%WeakSet%":"undefined"==typeof WeakSet?void 0:WeakSet};if(G)try{null.error;// eslint-disable-line no-unused-expressions
-}catch(A){// https://github.com/tc39/proposal-shadowrealm/pull/384#issuecomment-1364264229
-var M=G(G(A));h["%Error.prototype%"]=M}var U=function A(Q){var B;if("%AsyncFunction%"===Q)B=C("async function () {}");else if("%GeneratorFunction%"===Q)B=C("function* () {}");else if("%AsyncGeneratorFunction%"===Q)B=C("async function* () {}");else if("%AsyncGenerator%"===Q){var g=A("%AsyncGeneratorFunction%");g&&(B=g.prototype)}else if("%AsyncIteratorPrototype%"===Q){var I=A("%AsyncGenerator%");I&&G&&(B=G(I.prototype))}return h[Q]=B,B},s={"%ArrayBufferPrototype%":["ArrayBuffer","prototype"],"%ArrayPrototype%":["Array","prototype"],"%ArrayProto_entries%":["Array","prototype","entries"],"%ArrayProto_forEach%":["Array","prototype","forEach"],"%ArrayProto_keys%":["Array","prototype","keys"],"%ArrayProto_values%":["Array","prototype","values"],"%AsyncFunctionPrototype%":["AsyncFunction","prototype"],"%AsyncGenerator%":["AsyncGeneratorFunction","prototype"],"%AsyncGeneratorPrototype%":["AsyncGeneratorFunction","prototype","prototype"],"%BooleanPrototype%":["Boolean","prototype"],"%DataViewPrototype%":["DataView","prototype"],"%DatePrototype%":["Date","prototype"],"%ErrorPrototype%":["Error","prototype"],"%EvalErrorPrototype%":["EvalError","prototype"],"%Float32ArrayPrototype%":["Float32Array","prototype"],"%Float64ArrayPrototype%":["Float64Array","prototype"],"%FunctionPrototype%":["Function","prototype"],"%Generator%":["GeneratorFunction","prototype"],"%GeneratorPrototype%":["GeneratorFunction","prototype","prototype"],"%Int8ArrayPrototype%":["Int8Array","prototype"],"%Int16ArrayPrototype%":["Int16Array","prototype"],"%Int32ArrayPrototype%":["Int32Array","prototype"],"%JSONParse%":["JSON","parse"],"%JSONStringify%":["JSON","stringify"],"%MapPrototype%":["Map","prototype"],"%NumberPrototype%":["Number","prototype"],"%ObjectPrototype%":["Object","prototype"],"%ObjProto_toString%":["Object","prototype","toString"],"%ObjProto_valueOf%":["Object","prototype","valueOf"],"%PromisePrototype%":["Promise","prototype"],"%PromiseProto_then%":["Promise","prototype","then"],"%Promise
_all%":["Promise","all"],"%Promise_reject%":["Promise","reject"],"%Promise_resolve%":["Promise","resolve"],"%RangeErrorPrototype%":["RangeError","prototype"],"%ReferenceErrorPrototype%":["ReferenceError","prototype"],"%RegExpPrototype%":["RegExp","prototype"],"%SetPrototype%":["Set","prototype"],"%SharedArrayBufferPrototype%":["SharedArrayBuffer","prototype"],"%StringPrototype%":["String","prototype"],"%SymbolPrototype%":["Symbol","prototype"],"%SyntaxErrorPrototype%":["SyntaxError","prototype"],"%TypedArrayPrototype%":["TypedArray","prototype"],"%TypeErrorPrototype%":["TypeError","prototype"],"%Uint8ArrayPrototype%":["Uint8Array","prototype"],"%Uint8ClampedArrayPrototype%":["Uint8ClampedArray","prototype"],"%Uint16ArrayPrototype%":["Uint16Array","prototype"],"%Uint32ArrayPrototype%":["Uint32Array","prototype"],"%URIErrorPrototype%":["URIError","prototype"],"%WeakMapPrototype%":["WeakMap","prototype"],"%WeakSetPrototype%":["WeakSet","prototype"]},k=o("hHbXD"),R=o("coYQo"),y=k.call(Function.call,Array.prototype.concat),K=k.call(Function.apply,Array.prototype.splice),L=k.call(Function.call,String.prototype.replace),c=k.call(Function.call,String.prototype.slice),J=k.call(Function.call,RegExp.prototype.exec),a=/[^%.[\]]+|\[(?:(-?\d+(?:\.\d+)?)|(["'])((?:(?!\2)[^\\]|\\.)*?)\2)\]|(?=(?:\.|\[\])(?:\.|\[\]|%$))/g,H=/\\(\\)?/g,S=function(A){var Q=c(A,0,1),g=c(A,-1);if("%"===Q&&"%"!==g)throw new B("invalid intrinsic syntax, expected closing `%`");if("%"===g&&"%"!==Q)throw new B("invalid intrinsic syntax, expected opening `%`");var I=[];return L(A,a,function(A,Q,B,g){I[I.length]=B?L(g,H,"$1"):Q||A}),I},t=function(A,Q){var g,C=A;if(R(s,C)&&(C="%"+(g=s[C])[0]+"%"),R(h,C)){var E=h[C];if(E===Y&&(E=U(C)),void 0===E&&!Q)throw new I("intrinsic "+A+" exists, but is not available. 
Please file an issue!");return{alias:g,name:C,value:E}}throw new B("intrinsic "+A+" does not exist!")};A.exports=function(A,Q){if("string"!=typeof A||0===A.length)throw new I("intrinsic name must be a non-empty string");if(arguments.length>1&&"boolean"!=typeof Q)throw new I('"allowMissing" argument must be a boolean');if(null===J(/^%?[^%]*%?$/,A))throw new B("`%` may not be present anywhere but at the beginning and end of the intrinsic name");var g=S(A),C=g.length>0?g[0]:"",D=t("%"+C+"%",Q),w=D.name,o=D.value,N=!1,i=D.alias;i&&(C=i[0],K(g,y([0,1],i)));for(var G=1,Y=!0;G=g.length){var s=E(o,F);// By convention, when a data property is converted to an accessor
-// property to emulate a data property that does not suffer from
-// the override mistake, that accessor's getter is marked with
-// an `originalValue` property. Here, when we detect this, we
-// uphold the illusion by pretending to see that original data
-// property, i.e., returning the value rather than the getter
-// itself.
-o=(Y=!!s)&&"get"in s&&!("originalValue"in s.get)?s.get:o[F]}else Y=R(o,F),o=o[F];Y&&!N&&(h[w]=o)}}return o}}),N("445q1",function(A,Q){var B="undefined"!=typeof Symbol&&Symbol,g=o("dvu0f");A.exports=function(){return"function"==typeof B&&"function"==typeof Symbol&&"symbol"==typeof B("foo")&&"symbol"==typeof Symbol("bar")&&g()}}),N("dvu0f",function(A,Q){/* eslint complexity: [2, 18], max-statements: [2, 33] */A.exports=function(){if("function"!=typeof Symbol||"function"!=typeof Object.getOwnPropertySymbols)return!1;if("symbol"==typeof Symbol.iterator)return!0;var A={},Q=Symbol("test"),B=Object(Q);if("string"==typeof Q||"[object Symbol]"!==Object.prototype.toString.call(Q)||"[object Symbol]"!==Object.prototype.toString.call(B))return!1;for(Q in A[Q]=42,A)return!1;// eslint-disable-line no-restricted-syntax, no-unreachable-loop
-if("function"==typeof Object.keys&&0!==Object.keys(A).length||"function"==typeof Object.getOwnPropertyNames&&0!==Object.getOwnPropertyNames(A).length)return!1;var g=Object.getOwnPropertySymbols(A);if(1!==g.length||g[0]!==Q||!Object.prototype.propertyIsEnumerable.call(A,Q))return!1;if("function"==typeof Object.getOwnPropertyDescriptor){var I=Object.getOwnPropertyDescriptor(A,Q);if(42!==I.value||!0!==I.enumerable)return!1}return!0}}),N("f9ecA",function(A,Q){var B={foo:{}},g=Object;A.exports=function(){return({__proto__:B}).foo===B.foo&&!(({__proto__:null})instanceof g)}}),N("hHbXD",function(A,Q){var B=o("12Kd9");A.exports=Function.prototype.bind||B}),N("12Kd9",function(A,Q){var B=Object.prototype.toString,g=Math.max,I=function(A,Q){for(var B=[],g=0;g-1?g(C):C}}),N("azNMp",function(A,Q){var B=o("hHbXD"),g=o("d1Huc"),I=g("%Function.prototype.apply%"),C=g("%Function.prototype.call%"),E=g("%Reflect.apply%",!0)||B.call(C,I),D=g("%Object.getOwnPropertyDescriptor%",!0),w=g("%Object.defineProperty%",!0),N=g("%Math.max%");if(w)try{w({},"a",{value:1})}catch(A){// IE 8 has a broken defineProperty
-w=null}A.exports=function(A){var Q=E(B,C,arguments);return D&&w&&D(Q,"length").configurable&&w(Q,"length",{value:1+N(0,A.length-(arguments.length-1))}),Q};var i=function(){return E(B,I,arguments)};w?w(A.exports,"apply",{value:i}):A.exports.apply=i}),N("1jt5E",function(A,Q){var B="function"==typeof Map&&Map.prototype,g=Object.getOwnPropertyDescriptor&&B?Object.getOwnPropertyDescriptor(Map.prototype,"size"):null,I=B&&g&&"function"==typeof g.get?g.get:null,C=B&&Map.prototype.forEach,E="function"==typeof Set&&Set.prototype,D=Object.getOwnPropertyDescriptor&&E?Object.getOwnPropertyDescriptor(Set.prototype,"size"):null,w=E&&D&&"function"==typeof D.get?D.get:null,N=E&&Set.prototype.forEach,i="function"==typeof WeakMap&&WeakMap.prototype?WeakMap.prototype.has:null,G="function"==typeof WeakSet&&WeakSet.prototype?WeakSet.prototype.has:null,Y="function"==typeof WeakRef&&WeakRef.prototype?WeakRef.prototype.deref:null,F=Boolean.prototype.valueOf,h=Object.prototype.toString,M=Function.prototype.toString,U=String.prototype.match,s=String.prototype.slice,k=String.prototype.replace,R=String.prototype.toUpperCase,y=String.prototype.toLowerCase,K=RegExp.prototype.test,L=Array.prototype.concat,c=Array.prototype.join,J=Array.prototype.slice,a=Math.floor,H="function"==typeof BigInt?BigInt.prototype.valueOf:null,S=Object.getOwnPropertySymbols,t="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?Symbol.prototype.toString:null,O="function"==typeof Symbol&&"object"==typeof Symbol.iterator,f="function"==typeof Symbol&&Symbol.toStringTag&&(typeof Symbol.toStringTag===O?"object":"symbol")?Symbol.toStringTag:null,e=Object.prototype.propertyIsEnumerable,r=("function"==typeof Reflect?Reflect.getPrototypeOf:Object.getPrototypeOf)||([].__proto__===Array.prototype// eslint-disable-line no-proto
-?function(A){return A.__proto__;// eslint-disable-line no-proto
-}:null);function Z(A,Q){if(A===1/0||A===-1/0||A!=A||A&&A>-1e3&&A<1e3||K.call(/e/,Q))return Q;var B=/[0-9](?=(?:[0-9]{3})+(?![0-9]))/g;if("number"==typeof A){var g=A<0?-a(-A):a(A);// trunc(num)
-if(g!==A){var I=String(g),C=s.call(Q,I.length+1);return k.call(I,B,"$&_")+"."+k.call(k.call(C,/([0-9]{3})/g,"$&_"),/_$/,"")}}return k.call(Q,B,"$&_")}var n=o("9EZPc"),m=n.custom,j=z(m)?m:null;function q(A,Q,B){var g="double"===(B.quoteStyle||Q)?'"':"'";return g+A+g}function p(A){return"[object Array]"===P(A)&&(!f||!("object"==typeof A&&f in A))}function d(A){return"[object RegExp]"===P(A)&&(!f||!("object"==typeof A&&f in A))}// Symbol and BigInt do have Symbol.toStringTag by spec, so that can't be used to eliminate false positives
-function z(A){if(O)return A&&"object"==typeof A&&A instanceof Symbol;if("symbol"==typeof A)return!0;if(!A||"object"!=typeof A||!t)return!1;try{return t.call(A),!0}catch(A){}return!1}A.exports=function A(Q,B,g,E){var D=B||{};if(T(D,"quoteStyle")&&"single"!==D.quoteStyle&&"double"!==D.quoteStyle)throw TypeError('option "quoteStyle" must be "single" or "double"');if(T(D,"maxStringLength")&&("number"==typeof D.maxStringLength?D.maxStringLength<0&&D.maxStringLength!==1/0:null!==D.maxStringLength))throw TypeError('option "maxStringLength", if provided, must be a positive integer, Infinity, or `null`');var o=!T(D,"customInspect")||D.customInspect;if("boolean"!=typeof o&&"symbol"!==o)throw TypeError("option \"customInspect\", if provided, must be `true`, `false`, or `'symbol'`");if(T(D,"indent")&&null!==D.indent&&" "!==D.indent&&!(parseInt(D.indent,10)===D.indent&&D.indent>0))throw TypeError('option "indent" must be "\\t", an integer > 0, or `null`');if(T(D,"numericSeparator")&&"boolean"!=typeof D.numericSeparator)throw TypeError('option "numericSeparator", if provided, must be `true` or `false`');var h=D.numericSeparator;if(void 0===Q)return"undefined";if(null===Q)return"null";if("boolean"==typeof Q)return Q?"true":"false";if("string"==typeof Q)return function A(Q,B){if(Q.length>B.maxStringLength){var g=Q.length-B.maxStringLength;return A(s.call(Q,0,B.maxStringLength),B)+"... 
"+g+" more character"+(g>1?"s":"")}return q(k.call(k.call(Q,/(['\\])/g,"\\$1"),/[\x00-\x1f]/g,W),"single",B)}(Q,D);if("number"==typeof Q){if(0===Q)return 1/0/Q>0?"0":"-0";var R=String(Q);return h?Z(Q,R):R}if("bigint"==typeof Q){var K=String(Q)+"n";return h?Z(Q,K):K}var a=void 0===D.depth?5:D.depth;if(void 0===g&&(g=0),g>=a&&a>0&&"object"==typeof Q)return p(Q)?"[Array]":"[Object]";var S=function(A,Q){var B;if(" "===A.indent)B=" ";else{if("number"!=typeof A.indent||!(A.indent>0))return null;B=c.call(Array(A.indent+1)," ")}return{base:B,prev:c.call(Array(Q+1),B)}}(D,g);if(void 0===E)E=[];else if(l(E,Q)>=0)return"[Circular]";function m(Q,B,I){if(B&&(E=J.call(E)).push(B),I){var C={depth:D.depth};return T(D,"quoteStyle")&&(C.quoteStyle=D.quoteStyle),A(Q,C,g+1,E)}return A(Q,D,g+1,E)}if("function"==typeof Q&&!d(Q)){var x=function(A){if(A.name)return A.name;var Q=U.call(M.call(A),/^function\s*([\w$]+)/);return Q?Q[1]:null}(Q),_=X(Q,m);return"[Function"+(x?": "+x:" (anonymous)")+"]"+(_.length>0?" { "+c.call(_,", ")+" }":"")}if(z(Q)){var $=O?k.call(String(Q),/^(Symbol\(.*\))_[^)]*$/,"$1"):t.call(Q);return"object"!=typeof Q||O?$:u($)}if(Q&&"object"==typeof Q&&("undefined"!=typeof HTMLElement&&Q instanceof HTMLElement||"string"==typeof Q.nodeName&&"function"==typeof Q.getAttribute)){for(var AA,AQ="<"+y.call(String(Q.nodeName)),AB=Q.attributes||[],Ag=0;Ag",Q.childNodes&&Q.childNodes.length&&(AQ+="..."),AQ+=""+y.call(String(Q.nodeName))+">"}if(p(Q)){if(0===Q.length)return"[]";var AI=X(Q,m);return S&&!function(A){for(var Q=0;Q=0)return!1;return!0}(AI)?"["+v(AI,S)+"]":"[ "+c.call(AI,", ")+" ]"}if("[object Error]"===P(Q)&&(!f||!("object"==typeof Q&&f in Q))){var AC=X(Q,m);return"cause"in Error.prototype||!("cause"in Q)||e.call(Q,"cause")?0===AC.length?"["+String(Q)+"]":"{ ["+String(Q)+"] "+c.call(AC,", ")+" }":"{ ["+String(Q)+"] "+c.call(L.call("[cause]: "+m(Q.cause),AC),", ")+" }"}if("object"==typeof Q&&o){if(j&&"function"==typeof Q[j]&&n)return 
n(Q,{depth:a-g});if("symbol"!==o&&"function"==typeof Q.inspect)return Q.inspect()}if(function(A){if(!I||!A||"object"!=typeof A)return!1;try{I.call(A);try{w.call(A)}catch(A){return!0}return A instanceof Map;// core-js workaround, pre-v2.5.0
-}catch(A){}return!1}(Q)){var AE=[];return C&&C.call(Q,function(A,B){AE.push(m(B,Q,!0)+" => "+m(A,Q))}),V("Map",I.call(Q),AE,S)}if(function(A){if(!w||!A||"object"!=typeof A)return!1;try{w.call(A);try{I.call(A)}catch(A){return!0}return A instanceof Set;// core-js workaround, pre-v2.5.0
-}catch(A){}return!1}(Q)){var AD=[];return N&&N.call(Q,function(A){AD.push(m(A,Q))}),V("Set",w.call(Q),AD,S)}if(function(A){if(!i||!A||"object"!=typeof A)return!1;try{i.call(A,i);try{G.call(A,G)}catch(A){return!0}return A instanceof WeakMap;// core-js workaround, pre-v2.5.0
-}catch(A){}return!1}(Q))return b("WeakMap");if(function(A){if(!G||!A||"object"!=typeof A)return!1;try{G.call(A,G);try{i.call(A,i)}catch(A){return!0}return A instanceof WeakSet;// core-js workaround, pre-v2.5.0
-}catch(A){}return!1}(Q))return b("WeakSet");if(function(A){if(!Y||!A||"object"!=typeof A)return!1;try{return Y.call(A),!0}catch(A){}return!1}(Q))return b("WeakRef");if("[object Number]"===P(Q)&&(!f||!("object"==typeof Q&&f in Q)))return u(m(Number(Q)));if(function(A){if(!A||"object"!=typeof A||!H)return!1;try{return H.call(A),!0}catch(A){}return!1}(Q))return u(m(H.call(Q)));if("[object Boolean]"===P(Q)&&(!f||!("object"==typeof Q&&f in Q)))return u(F.call(Q));if("[object String]"===P(Q)&&(!f||!("object"==typeof Q&&f in Q)))return u(m(String(Q)));if(!("[object Date]"===P(Q)&&(!f||!("object"==typeof Q&&f in Q)))&&!d(Q)){var Aw=X(Q,m),Ao=r?r(Q)===Object.prototype:Q instanceof Object||Q.constructor===Object,AN=Q instanceof Object?"":"null prototype",Ai=!Ao&&f&&Object(Q)===Q&&f in Q?s.call(P(Q),8,-1):AN?"Object":"",AG=(Ao||"function"!=typeof Q.constructor?"":Q.constructor.name?Q.constructor.name+" ":"")+(Ai||AN?"["+c.call(L.call([],Ai||[],AN||[]),": ")+"] ":"");return 0===Aw.length?AG+"{}":S?AG+"{"+v(Aw,S)+"}":AG+"{ "+c.call(Aw,", ")+" }"}return String(Q)};var x=Object.prototype.hasOwnProperty||function(A){return A in this};function T(A,Q){return x.call(A,Q)}function P(A){return h.call(A)}function l(A,Q){if(A.indexOf)return A.indexOf(Q);for(var B=0,g=A.length;B1;){var Q=A.pop(),B=Q.obj[Q.prop];if(I(B)){for(var g=[],C=0;C=48&&N<=57// 0-9
-||N>=65&&N<=90// a-z
-||N>=97&&N<=122// A-Z
-||E===B.RFC1738&&(40===N||41// ( )
-===N)){w+=D.charAt(o);continue}if(N<128){w+=C[N];continue}if(N<2048){w+=C[192|N>>6]+C[128|63&N];continue}if(N<55296||N>=57344){w+=C[224|N>>12]+C[128|N>>6&63]+C[128|63&N];continue}o+=1,/* eslint operator-linebreak: [2, "before"] */w+=C[240|(N=65536+((1023&N)<<10|1023&D.charCodeAt(o)))>>18]+C[128|N>>12&63]+C[128|N>>6&63]+C[128|63&N]}return w},isBuffer:function(A){return!!A&&"object"==typeof A&&!!(A.constructor&&A.constructor.isBuffer&&A.constructor.isBuffer(A))},isRegExp:function(A){return"[object RegExp]"===Object.prototype.toString.call(A)},maybeMap:function(A,Q){if(I(A)){for(var B=[],g=0;g-1?A.split(","):A},D=function(A,Q){var D={__proto__:null},w=Q.ignoreQueryPrefix?A.replace(/^\?/,""):A,o=Q.parameterLimit===1/0?void 0:Q.parameterLimit,N=w.split(Q.delimiter,o),i=-1,G=Q.charset;if(Q.charsetSentinel)for(Y=0;Y-1&&(h=I(h)?[h]:h),g.call(D,F)?D[F]=B.combine(D[F],h):D[F]=h}return D},w=function(A,Q,B,g){for(var I=g?Q:E(Q,B),C=A.length-1;C>=0;--C){var D,w=A[C];if("[]"===w&&B.parseArrays)D=[].concat(I);else{D=B.plainObjects?Object.create(null):{};var o="["===w.charAt(0)&&"]"===w.charAt(w.length-1)?w.slice(1,-1):w,N=parseInt(o,10);B.parseArrays||""!==o?!isNaN(N)&&w!==o&&String(N)===o&&N>=0&&B.parseArrays&&N<=B.arrayLimit?(D=[])[N]=I:"__proto__"!==o&&(D[o]=I):D={0:I}}I=D}return I},N=function(A,Q,B,I){if(A){// Transform dot notation to bracket notation
-var C=B.allowDots?A.replace(/\.([^.[]+)/g,"[$1]"):A,E=/(\[[^[\]]*])/g,D=B.depth>0&&/(\[[^[\]]*])/.exec(C),o=D?C.slice(0,D.index):C,N=[];if(o){// If we aren't using plain objects, optionally prefix keys that would overwrite object prototype properties
-if(!B.plainObjects&&g.call(Object.prototype,o)&&!B.allowPrototypes)return;N.push(o)}for(// Loop through children appending to the array until we hit depth
-var i=0;B.depth>0&&null!==(D=E.exec(C))&&i1)for(var B=1;B>18&63]+J[g>>12&63]+J[g>>6&63]+J[63&g]);return I.join("")}(A,C,C+16383>E?E:C+16383));return 1===g?I.push(J[(Q=A[B-1])>>2]+J[Q<<4&63]+"=="):2===g&&I.push(J[(Q=(A[B-2]<<8)+A[B-1])>>10]+J[Q>>4&63]+J[Q<<2&63]+"="),I.join("")};for(var J=[],a=[],H="undefined"!=typeof Uint8Array?Uint8Array:Array,S="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",t=0,O=S.length;t>1,N=-7,i=B?I-1:0,G=B?-1:1,Y=A[Q+i];for(i+=G,C=Y&(1<<-N)-1,Y>>=-N,N+=D;N>0;C=256*C+A[Q+i],i+=G,N-=8);for(E=C&(1<<-N)-1,C>>=-N,N+=g;N>0;E=256*E+A[Q+i],i+=G,N-=8);if(0===C)C=1-o;else{if(C===w)return E?NaN:(Y?-1:1)*(1/0);E+=Math.pow(2,g),C-=o}return(Y?-1:1)*E*Math.pow(2,C-g)},C=function(A,Q,B,g,I,C){var E,D,w,o=8*C-I-1,N=(1<>1,G=23===I?5960464477539062e-23:0,Y=g?0:C-1,F=g?1:-1,h=Q<0||0===Q&&1/Q<0?1:0;for(isNaN(Q=Math.abs(Q))||Q===1/0?(D=isNaN(Q)?1:0,E=N):(E=Math.floor(Math.log(Q)/Math.LN2),Q*(w=Math.pow(2,-E))<1&&(E--,w*=2),E+i>=1?Q+=G/w:Q+=G*Math.pow(2,1-i),Q*w>=2&&(E++,w/=2),E+i>=N?(D=0,E=N):E+i>=1?(D=(Q*w-1)*Math.pow(2,I),E+=i):(D=Q*Math.pow(2,i-1)*Math.pow(2,I),E=0));I>=8;A[B+Y]=255&D,Y+=F,D/=256,I-=8);for(E=E<0;A[B+Y]=255&E,Y+=F,E/=256,o-=8);A[B+Y-F]|=128*h};let f="function"==typeof Symbol&&"function"// eslint-disable-line dot-notation
-==typeof Symbol.for?Symbol.for("nodejs.util.inspect.custom")// eslint-disable-line dot-notation
-:null;function e(A){if(A>2147483647)throw RangeError('The value "'+A+'" is invalid for option "size"');// Return an augmented `Uint8Array` instance
-let Q=new Uint8Array(A);return Object.setPrototypeOf(Q,r.prototype),Q}/**
- * The Buffer constructor returns instances of `Uint8Array` that have their
- * prototype changed to `Buffer.prototype`. Furthermore, `Buffer` is a subclass of
- * `Uint8Array`, so the returned instances will have all the node `Buffer` methods
- * and the `Uint8Array` methods. Square bracket notation works as expected -- it
- * returns a single octet.
- *
- * The `Uint8Array` prototype remains unmodified.
- */function r(A,Q,B){// Common case.
-if("number"==typeof A){if("string"==typeof Q)throw TypeError('The "string" argument must be of type string. Received type number');return m(A)}return Z(A,Q,B)}function Z(A,Q,B){if("string"==typeof A)return function(A,Q){if(("string"!=typeof Q||""===Q)&&(Q="utf8"),!r.isEncoding(Q))throw TypeError("Unknown encoding: "+Q);let B=0|d(A,Q),g=e(B),I=g.write(A,Q);return I!==B&&// cause everything after the first invalid character to be ignored. (e.g.
-// 'abxxcd' will be treated as 'ab')
-(g=g.slice(0,I)),g}(A,Q);if(ArrayBuffer.isView(A))return function(A){if(Ao(A,Uint8Array)){let Q=new Uint8Array(A);return q(Q.buffer,Q.byteOffset,Q.byteLength)}return j(A)}(A);if(null==A)throw TypeError("The first argument must be one of type string, Buffer, ArrayBuffer, Array, or Array-like Object. Received type "+typeof A);if(Ao(A,ArrayBuffer)||A&&Ao(A.buffer,ArrayBuffer)||"undefined"!=typeof SharedArrayBuffer&&(Ao(A,SharedArrayBuffer)||A&&Ao(A.buffer,SharedArrayBuffer)))return q(A,Q,B);if("number"==typeof A)throw TypeError('The "value" argument must not be of type number. Received type number');let g=A.valueOf&&A.valueOf();if(null!=g&&g!==A)return r.from(g,Q,B);let I=function(A){var Q;if(r.isBuffer(A)){let Q=0|p(A.length),B=e(Q);return 0===B.length||A.copy(B,0,0,Q),B}return void 0!==A.length?"number"!=typeof A.length||(Q=A.length)!=Q// eslint-disable-line no-self-compare
-?e(0):j(A):"Buffer"===A.type&&Array.isArray(A.data)?j(A.data):void 0}(A);if(I)return I;if("undefined"!=typeof Symbol&&null!=Symbol.toPrimitive&&"function"==typeof A[Symbol.toPrimitive])return r.from(A[Symbol.toPrimitive]("string"),Q,B);throw TypeError("The first argument must be one of type string, Buffer, ArrayBuffer, Array, or Array-like Object. Received type "+typeof A)}function n(A){if("number"!=typeof A)throw TypeError('"size" argument must be of type number');if(A<0)throw RangeError('The value "'+A+'" is invalid for option "size"')}function m(A){return n(A),e(A<0?0:0|p(A))}function j(A){let Q=A.length<0?0:0|p(A.length),B=e(Q);for(let g=0;g=2147483647)throw RangeError("Attempt to allocate Buffer larger than maximum size: 0x7fffffff bytes");return 0|A}function d(A,Q){if(r.isBuffer(A))return A.length;if(ArrayBuffer.isView(A)||Ao(A,ArrayBuffer))return A.byteLength;if("string"!=typeof A)throw TypeError('The "string" argument must be one of type string, Buffer, or ArrayBuffer. Received type '+typeof A);let B=A.length,g=arguments.length>2&&!0===arguments[2];if(!g&&0===B)return 0;// Use a for loop to avoid recursion
-let I=!1;for(;;)switch(Q){case"ascii":case"latin1":case"binary":return B;case"utf8":case"utf-8":return AE(A).length;case"ucs2":case"ucs-2":case"utf16le":case"utf-16le":return 2*B;case"hex":return B>>>1;case"base64":return AD(A).length;default:if(I)return g?-1:AE(A).length// assume utf8
-;Q=(""+Q).toLowerCase(),I=!0}}function z(A,Q,B){let I=!1;// Return early if start > this.length. Done here to prevent potential uint32
-// coercion fail below.
-if((void 0===Q||Q<0)&&(Q=0),Q>this.length||((void 0===B||B>this.length)&&(B=this.length),B<=0||// Force coercion to uint32. This will also coerce falsey/NaN values to 0.
-(B>>>=0)<=(Q>>>=0)))return"";for(A||(A="utf8");;)switch(A){case"hex":return function(A,Q,B){let g=A.length;(!Q||Q<0)&&(Q=0),(!B||B<0||B>g)&&(B=g);let I="";for(let g=Q;g= `byteOffset`,
-// OR the last index of `val` in `buffer` at offset <= `byteOffset`.
-//
-// Arguments:
-// - buffer - a Buffer to search
-// - val - a string, Buffer, or number
-// - byteOffset - an index into `buffer`; will be clamped to an int32
-// - encoding - an optional encoding, relevant is val is a string
-// - dir - true for indexOf, false for lastIndexOf
-function T(A,Q,B,g,I){var C;// Empty buffer means no match
-if(0===A.length)return -1;if("string"==typeof B?(g=B,B=0):B>2147483647?B=2147483647:B<-2147483648&&(B=-2147483648),(C=B=+B// Coerce to Number.
-)!=C&&(B=I?0:A.length-1),B<0&&(B=A.length+B),B>=A.length){if(I)return -1;B=A.length-1}else if(B<0){if(!I)return -1;B=0}// Finally, search either indexOf (if dir is true) or lastIndexOf
-if("string"==typeof Q&&(Q=r.from(Q,g)),r.isBuffer(Q))return(// Special case: looking for empty string/buffer always fails
-0===Q.length?-1:P(A,Q,B,g,I));if("number"==typeof Q)return(Q&=255// Search for a byte value [0-255]
-,"function"==typeof Uint8Array.prototype.indexOf)?I?Uint8Array.prototype.indexOf.call(A,Q,B):Uint8Array.prototype.lastIndexOf.call(A,Q,B):P(A,[Q],B,g,I);throw TypeError("val must be string, number or Buffer")}function P(A,Q,B,g,I){let C,E=1,D=A.length,w=Q.length;if(void 0!==g&&("ucs2"===(g=String(g).toLowerCase())||"ucs-2"===g||"utf16le"===g||"utf-16le"===g)){if(A.length<2||Q.length<2)return -1;E=2,D/=2,w/=2,B/=2}function o(A,Q){return 1===E?A[Q]:A.readUInt16BE(Q*E)}if(I){let g=-1;for(C=B;CD&&(B=D-w),C=B;C>=0;C--){let B=!0;for(let g=0;g239?4:Q>223?3:Q>191?2:1;if(I+E<=B){let B,g,D,w;switch(E){case 1:Q<128&&(C=Q);break;case 2:(192&(B=A[I+1]))==128&&(w=(31&Q)<<6|63&B)>127&&(C=w);break;case 3:B=A[I+1],g=A[I+2],(192&B)==128&&(192&g)==128&&(w=(15&Q)<<12|(63&B)<<6|63&g)>2047&&(w<55296||w>57343)&&(C=w);break;case 4:B=A[I+1],g=A[I+2],D=A[I+3],(192&B)==128&&(192&g)==128&&(192&D)==128&&(w=(15&Q)<<18|(63&B)<<12|(63&g)<<6|63&D)>65535&&w<1114112&&(C=w)}}null===C?(// we did not generate a valid codePoint so insert a
-// replacement char (U+FFFD) and advance only 1 byte
-C=65533,E=1):C>65535&&(// encode to utf16 (surrogate pair dance)
-C-=65536,g.push(C>>>10&1023|55296),C=56320|1023&C),g.push(C),I+=E}return function(A){let Q=A.length;if(Q<=4096)return String.fromCharCode.apply(String,A)// avoid extra slice()
-;// Decode in chunks to avoid "call stack size exceeded".
-let B="",g=0;for(;gB)throw RangeError("Trying to access beyond buffer length")}function u(A,Q,B,g,I,C){if(!r.isBuffer(A))throw TypeError('"buffer" argument must be a Buffer instance');if(Q>I||QA.length)throw RangeError("Index out of range")}function b(A,Q,B,g,I){AB(Q,g,I,A,B,7);let C=Number(Q&BigInt(4294967295));A[B++]=C,C>>=8,A[B++]=C,C>>=8,A[B++]=C,C>>=8,A[B++]=C;let E=Number(Q>>BigInt(32)&BigInt(4294967295));return A[B++]=E,E>>=8,A[B++]=E,E>>=8,A[B++]=E,E>>=8,A[B++]=E,B}function V(A,Q,B,g,I){AB(Q,g,I,A,B,7);let C=Number(Q&BigInt(4294967295));A[B+7]=C,C>>=8,A[B+6]=C,C>>=8,A[B+5]=C,C>>=8,A[B+4]=C;let E=Number(Q>>BigInt(32)&BigInt(4294967295));return A[B+3]=E,E>>=8,A[B+2]=E,E>>=8,A[B+1]=E,E>>=8,A[B]=E,B+8}function v(A,Q,B,g,I,C){if(B+g>A.length||B<0)throw RangeError("Index out of range")}function X(A,Q,B,g,I){return Q=+Q,B>>>=0,I||v(A,Q,B,4,34028234663852886e22,-34028234663852886e22),C(A,Q,B,g,23,4),B+4}function _(A,Q,B,g,I){return Q=+Q,B>>>=0,I||v(A,Q,B,8,17976931348623157e292,-17976931348623157e292),C(A,Q,B,g,52,8),B+8}/**
- * If `Buffer.TYPED_ARRAY_SUPPORT`:
- * === true Use Uint8Array implementation (fastest)
- * === false Print warning and recommend using `buffer` v4.x which has an Object
- * implementation (most compatible, even IE6)
- *
- * Browsers that support typed arrays are IE 10+, Firefox 4+, Chrome 7+, Safari 5.1+,
- * Opera 11.6+, iOS 4.2+.
- *
- * We report that the browser does not support typed arrays if the are not subclassable
- * using __proto__. Firefox 4-29 lacks support for adding new properties to `Uint8Array`
- * (See: https://bugzilla.mozilla.org/show_bug.cgi?id=695438). IE 10 lacks support
- * for __proto__ and has a buggy typed array implementation.
- */r.TYPED_ARRAY_SUPPORT=function(){// Can typed array instances can be augmented?
-try{let A=new Uint8Array(1),Q={foo:function(){return 42}};return Object.setPrototypeOf(Q,Uint8Array.prototype),Object.setPrototypeOf(A,Q),42===A.foo()}catch(A){return!1}}(),r.TYPED_ARRAY_SUPPORT||"undefined"==typeof console||"function"!=typeof console.error||console.error("This browser lacks typed array (Uint8Array) support which is required by `buffer` v5.x. Use `buffer` v4.x if you require old browser support."),Object.defineProperty(r.prototype,"parent",{enumerable:!0,get:function(){if(r.isBuffer(this))return this.buffer}}),Object.defineProperty(r.prototype,"offset",{enumerable:!0,get:function(){if(r.isBuffer(this))return this.byteOffset}}),r.poolSize=8192// not used by this implementation
-,/**
- * Functionally equivalent to Buffer(arg, encoding) but throws a TypeError
- * if value is a number.
- * Buffer.from(str[, encoding])
- * Buffer.from(array)
- * Buffer.from(buffer)
- * Buffer.from(arrayBuffer[, byteOffset[, length]])
- **/r.from=function(A,Q,B){return Z(A,Q,B)},// Note: Change prototype *after* Buffer.from is defined to workaround Chrome bug:
-// https://github.com/feross/buffer/pull/148
-Object.setPrototypeOf(r.prototype,Uint8Array.prototype),Object.setPrototypeOf(r,Uint8Array),/**
- * Creates a new filled Buffer instance.
- * alloc(size[, fill[, encoding]])
- **/r.alloc=function(A,Q,B){return(n(A),A<=0)?e(A):void 0!==Q?"string"==typeof B?e(A).fill(Q,B):e(A).fill(Q):e(A)},/**
- * Equivalent to Buffer(num), by default creates a non-zero-filled Buffer instance.
- * */r.allocUnsafe=function(A){return m(A)},/**
- * Equivalent to SlowBuffer(num), by default creates a non-zero-filled Buffer instance.
- */r.allocUnsafeSlow=function(A){return m(A)},r.isBuffer=function(A){return null!=A&&!0===A._isBuffer&&A!==r.prototype// so Buffer.isBuffer(Buffer.prototype) will be false
-},r.compare=function(A,Q){if(Ao(A,Uint8Array)&&(A=r.from(A,A.offset,A.byteLength)),Ao(Q,Uint8Array)&&(Q=r.from(Q,Q.offset,Q.byteLength)),!r.isBuffer(A)||!r.isBuffer(Q))throw TypeError('The "buf1", "buf2" arguments must be one of type Buffer or Uint8Array');if(A===Q)return 0;let B=A.length,g=Q.length;for(let I=0,C=Math.min(B,g);Ig.length?(r.isBuffer(Q)||(Q=r.from(Q)),Q.copy(g,I)):Uint8Array.prototype.set.call(g,Q,I);else if(r.isBuffer(Q))Q.copy(g,I);else throw TypeError('"list" argument must be an Array of Buffers');I+=Q.length}return g},r.byteLength=d,// This property is used by `Buffer.isBuffer` (and the `is-buffer` npm package)
-// to detect a Buffer instance. It's not possible to use `instanceof Buffer`
-// reliably in a browserify context because there could be multiple different
-// copies of the 'buffer' package in use. This method works even for Buffer
-// instances that were created from another copy of the `buffer` package.
-// See: https://github.com/feross/buffer/issues/154
-r.prototype._isBuffer=!0,r.prototype.swap16=function(){let A=this.length;if(A%2!=0)throw RangeError("Buffer size must be a multiple of 16-bits");for(let Q=0;Q50&&(A+=" ... "),""},f&&(r.prototype[f]=r.prototype.inspect),r.prototype.compare=function(A,Q,B,g,I){if(Ao(A,Uint8Array)&&(A=r.from(A,A.offset,A.byteLength)),!r.isBuffer(A))throw TypeError('The "target" argument must be one of type Buffer or Uint8Array. Received type '+typeof A);if(void 0===Q&&(Q=0),void 0===B&&(B=A?A.length:0),void 0===g&&(g=0),void 0===I&&(I=this.length),Q<0||B>A.length||g<0||I>this.length)throw RangeError("out of range index");if(g>=I&&Q>=B)return 0;if(g>=I)return -1;if(Q>=B)return 1;if(Q>>>=0,B>>>=0,g>>>=0,I>>>=0,this===A)return 0;let C=I-g,E=B-Q,D=Math.min(C,E),w=this.slice(g,I),o=A.slice(Q,B);for(let A=0;A>>=0,isFinite(B)?(B>>>=0,void 0===g&&(g="utf8")):(g=B,B=void 0);else throw Error("Buffer.write(string, encoding, offset[, length]) is no longer supported");let G=this.length-Q;if((void 0===B||B>G)&&(B=G),A.length>0&&(B<0||Q<0)||Q>this.length)throw RangeError("Attempt to write outside buffer bounds");g||(g="utf8");let Y=!1;for(;;)switch(g){case"hex":return function(A,Q,B,g){let I;B=Number(B)||0;let C=A.length-B;g?(g=Number(g))>C&&(g=C):g=C;let E=Q.length;for(g>E/2&&(g=E/2),I=0;I>8,I.push(B%256),I.push(g);return I}(A,this.length-N),this,N,i);default:if(Y)throw TypeError("Unknown encoding: "+g);g=(""+g).toLowerCase(),Y=!0}},r.prototype.toJSON=function(){return{type:"Buffer",data:Array.prototype.slice.call(this._arr||this,0)}},r.prototype.slice=function(A,Q){let B=this.length;A=~~A,Q=void 0===Q?B:~~Q,A<0?(A+=B)<0&&(A=0):A>B&&(A=B),Q<0?(Q+=B)<0&&(Q=0):Q>B&&(Q=B),Q>>=0,Q>>>=0,B||W(A,Q,this.length);let g=this[A],I=1,C=0;for(;++C>>=0,Q>>>=0,B||W(A,Q,this.length);let g=this[A+--Q],I=1;for(;Q>0&&(I*=256);)g+=this[A+--Q]*I;return g},r.prototype.readUint8=r.prototype.readUInt8=function(A,Q){return 
A>>>=0,Q||W(A,1,this.length),this[A]},r.prototype.readUint16LE=r.prototype.readUInt16LE=function(A,Q){return A>>>=0,Q||W(A,2,this.length),this[A]|this[A+1]<<8},r.prototype.readUint16BE=r.prototype.readUInt16BE=function(A,Q){return A>>>=0,Q||W(A,2,this.length),this[A]<<8|this[A+1]},r.prototype.readUint32LE=r.prototype.readUInt32LE=function(A,Q){return A>>>=0,Q||W(A,4,this.length),(this[A]|this[A+1]<<8|this[A+2]<<16)+16777216*this[A+3]},r.prototype.readUint32BE=r.prototype.readUInt32BE=function(A,Q){return A>>>=0,Q||W(A,4,this.length),16777216*this[A]+(this[A+1]<<16|this[A+2]<<8|this[A+3])},r.prototype.readBigUInt64LE=Ai(function(A){Ag(A>>>=0,"offset");let Q=this[A],B=this[A+7];(void 0===Q||void 0===B)&&AI(A,this.length-8);let g=Q+256*this[++A]+65536*this[++A]+16777216*this[++A],I=this[++A]+256*this[++A]+65536*this[++A]+16777216*B;return BigInt(g)+(BigInt(I)<>>=0,"offset");let Q=this[A],B=this[A+7];(void 0===Q||void 0===B)&&AI(A,this.length-8);let g=16777216*Q+65536*this[++A]+256*this[++A]+this[++A],I=16777216*this[++A]+65536*this[++A]+256*this[++A]+B;return(BigInt(g)<>>=0,Q>>>=0,B||W(A,Q,this.length);let g=this[A],I=1,C=0;for(;++C=(I*=128)&&(g-=Math.pow(2,8*Q)),g},r.prototype.readIntBE=function(A,Q,B){A>>>=0,Q>>>=0,B||W(A,Q,this.length);let g=Q,I=1,C=this[A+--g];for(;g>0&&(I*=256);)C+=this[A+--g]*I;return C>=(I*=128)&&(C-=Math.pow(2,8*Q)),C},r.prototype.readInt8=function(A,Q){return(A>>>=0,Q||W(A,1,this.length),128&this[A])?-((255-this[A]+1)*1):this[A]},r.prototype.readInt16LE=function(A,Q){A>>>=0,Q||W(A,2,this.length);let B=this[A]|this[A+1]<<8;return 32768&B?4294901760|B:B},r.prototype.readInt16BE=function(A,Q){A>>>=0,Q||W(A,2,this.length);let B=this[A+1]|this[A]<<8;return 32768&B?4294901760|B:B},r.prototype.readInt32LE=function(A,Q){return A>>>=0,Q||W(A,4,this.length),this[A]|this[A+1]<<8|this[A+2]<<16|this[A+3]<<24},r.prototype.readInt32BE=function(A,Q){return 
A>>>=0,Q||W(A,4,this.length),this[A]<<24|this[A+1]<<16|this[A+2]<<8|this[A+3]},r.prototype.readBigInt64LE=Ai(function(A){Ag(A>>>=0,"offset");let Q=this[A],B=this[A+7];(void 0===Q||void 0===B)&&AI(A,this.length-8);let g=this[A+4]+256*this[A+5]+65536*this[A+6]+(B<<24// Overflow
-);return(BigInt(g)<>>=0,"offset");let Q=this[A],B=this[A+7];(void 0===Q||void 0===B)&&AI(A,this.length-8);let g=(Q<<24)+// Overflow
-65536*this[++A]+256*this[++A]+this[++A];return(BigInt(g)<>>=0,Q||W(A,4,this.length),I(this,A,!0,23,4)},r.prototype.readFloatBE=function(A,Q){return A>>>=0,Q||W(A,4,this.length),I(this,A,!1,23,4)},r.prototype.readDoubleLE=function(A,Q){return A>>>=0,Q||W(A,8,this.length),I(this,A,!0,52,8)},r.prototype.readDoubleBE=function(A,Q){return A>>>=0,Q||W(A,8,this.length),I(this,A,!1,52,8)},r.prototype.writeUintLE=r.prototype.writeUIntLE=function(A,Q,B,g){if(A=+A,Q>>>=0,B>>>=0,!g){let g=Math.pow(2,8*B)-1;u(this,A,Q,B,g,0)}let I=1,C=0;for(this[Q]=255&A;++C>>=0,B>>>=0,!g){let g=Math.pow(2,8*B)-1;u(this,A,Q,B,g,0)}let I=B-1,C=1;for(this[Q+I]=255&A;--I>=0&&(C*=256);)this[Q+I]=A/C&255;return Q+B},r.prototype.writeUint8=r.prototype.writeUInt8=function(A,Q,B){return A=+A,Q>>>=0,B||u(this,A,Q,1,255,0),this[Q]=255&A,Q+1},r.prototype.writeUint16LE=r.prototype.writeUInt16LE=function(A,Q,B){return A=+A,Q>>>=0,B||u(this,A,Q,2,65535,0),this[Q]=255&A,this[Q+1]=A>>>8,Q+2},r.prototype.writeUint16BE=r.prototype.writeUInt16BE=function(A,Q,B){return A=+A,Q>>>=0,B||u(this,A,Q,2,65535,0),this[Q]=A>>>8,this[Q+1]=255&A,Q+2},r.prototype.writeUint32LE=r.prototype.writeUInt32LE=function(A,Q,B){return A=+A,Q>>>=0,B||u(this,A,Q,4,4294967295,0),this[Q+3]=A>>>24,this[Q+2]=A>>>16,this[Q+1]=A>>>8,this[Q]=255&A,Q+4},r.prototype.writeUint32BE=r.prototype.writeUInt32BE=function(A,Q,B){return A=+A,Q>>>=0,B||u(this,A,Q,4,4294967295,0),this[Q]=A>>>24,this[Q+1]=A>>>16,this[Q+2]=A>>>8,this[Q+3]=255&A,Q+4},r.prototype.writeBigUInt64LE=Ai(function(A,Q=0){return b(this,A,Q,BigInt(0),BigInt("0xffffffffffffffff"))}),r.prototype.writeBigUInt64BE=Ai(function(A,Q=0){return V(this,A,Q,BigInt(0),BigInt("0xffffffffffffffff"))}),r.prototype.writeIntLE=function(A,Q,B,g){if(A=+A,Q>>>=0,!g){let g=Math.pow(2,8*B-1);u(this,A,Q,B,g-1,-g)}let I=0,C=1,E=0;for(this[Q]=255&A;++I>0)-E&255;return Q+B},r.prototype.writeIntBE=function(A,Q,B,g){if(A=+A,Q>>>=0,!g){let g=Math.pow(2,8*B-1);u(this,A,Q,B,g-1,-g)}let 
I=B-1,C=1,E=0;for(this[Q+I]=255&A;--I>=0&&(C*=256);)A<0&&0===E&&0!==this[Q+I+1]&&(E=1),this[Q+I]=(A/C>>0)-E&255;return Q+B},r.prototype.writeInt8=function(A,Q,B){return A=+A,Q>>>=0,B||u(this,A,Q,1,127,-128),A<0&&(A=255+A+1),this[Q]=255&A,Q+1},r.prototype.writeInt16LE=function(A,Q,B){return A=+A,Q>>>=0,B||u(this,A,Q,2,32767,-32768),this[Q]=255&A,this[Q+1]=A>>>8,Q+2},r.prototype.writeInt16BE=function(A,Q,B){return A=+A,Q>>>=0,B||u(this,A,Q,2,32767,-32768),this[Q]=A>>>8,this[Q+1]=255&A,Q+2},r.prototype.writeInt32LE=function(A,Q,B){return A=+A,Q>>>=0,B||u(this,A,Q,4,2147483647,-2147483648),this[Q]=255&A,this[Q+1]=A>>>8,this[Q+2]=A>>>16,this[Q+3]=A>>>24,Q+4},r.prototype.writeInt32BE=function(A,Q,B){return A=+A,Q>>>=0,B||u(this,A,Q,4,2147483647,-2147483648),A<0&&(A=4294967295+A+1),this[Q]=A>>>24,this[Q+1]=A>>>16,this[Q+2]=A>>>8,this[Q+3]=255&A,Q+4},r.prototype.writeBigInt64LE=Ai(function(A,Q=0){return b(this,A,Q,-BigInt("0x8000000000000000"),BigInt("0x7fffffffffffffff"))}),r.prototype.writeBigInt64BE=Ai(function(A,Q=0){return V(this,A,Q,-BigInt("0x8000000000000000"),BigInt("0x7fffffffffffffff"))}),r.prototype.writeFloatLE=function(A,Q,B){return X(this,A,Q,!0,B)},r.prototype.writeFloatBE=function(A,Q,B){return X(this,A,Q,!1,B)},r.prototype.writeDoubleLE=function(A,Q,B){return _(this,A,Q,!0,B)},r.prototype.writeDoubleBE=function(A,Q,B){return _(this,A,Q,!1,B)},// copy(targetBuffer, targetStart=0, sourceStart=0, sourceEnd=buffer.length)
-r.prototype.copy=function(A,Q,B,g){if(!r.isBuffer(A))throw TypeError("argument should be a Buffer");// Copy 0 bytes; we're done
-if(B||(B=0),g||0===g||(g=this.length),Q>=A.length&&(Q=A.length),Q||(Q=0),g>0&&g=this.length)throw RangeError("Index out of range");if(g<0)throw RangeError("sourceEnd out of bounds");g>this.length&&(g=this.length),A.length-Q>>=0,B=void 0===B?this.length:B>>>0,A||(A=0),"number"==typeof A)for(I=Q;I=g+4;B-=3)Q=`_${A.slice(B-3,B)}${Q}`;return`${A.slice(0,B)}${Q}`}function AB(A,Q,B,g,I,C){if(A>B||A3?0===Q||Q===BigInt(0)?`>= 0${I} and < 2${I} ** ${(C+1)*8}${I}`:`>= -(2${I} ** ${(C+1)*8-1}${I}) and < 2 ** ${(C+1)*8-1}${I}`:`>= ${Q}${I} and <= ${B}${I}`,new $.ERR_OUT_OF_RANGE("value",g,A)}Ag(I,"offset"),(void 0===g[I]||void 0===g[I+C])&&AI(I,g.length-(C+1))}function Ag(A,Q){if("number"!=typeof A)throw new $.ERR_INVALID_ARG_TYPE(Q,"number",A)}function AI(A,Q,B){if(Math.floor(A)!==A)throw Ag(A,B),new $.ERR_OUT_OF_RANGE(B||"offset","an integer",A);if(Q<0)throw new $.ERR_BUFFER_OUT_OF_BOUNDS;throw new $.ERR_OUT_OF_RANGE(B||"offset",`>= ${B?1:0} and <= ${Q}`,A)}AA("ERR_BUFFER_OUT_OF_BOUNDS",function(A){return A?`${A} is outside of buffer bounds`:"Attempt to access memory outside buffer bounds"},RangeError),AA("ERR_INVALID_ARG_TYPE",function(A,Q){return`The "${A}" argument must be of type number. Received type ${typeof Q}`},TypeError),AA("ERR_OUT_OF_RANGE",function(A,Q,B){let g=`The value of "${A}" is out of range.`,I=B;return Number.isInteger(B)&&Math.abs(B)>4294967296?I=AQ(String(B)):"bigint"==typeof B&&(I=String(B),(B>BigInt(2)**BigInt(32)||B<-(BigInt(2)**BigInt(32)))&&(I=AQ(I)),I+="n"),g+=` It must be ${Q}. Received ${I}`},RangeError);// HELPER FUNCTIONS
-// ================
-let AC=/[^+/0-9A-Za-z-_]/g;function AE(A,Q){let B;Q=Q||1/0;let g=A.length,I=null,C=[];for(let E=0;E55295&&B<57344){// last char was a lead
-if(!I){// no lead yet
-if(B>56319||E+1===g){// unexpected trail
-(Q-=3)>-1&&C.push(239,191,189);continue}// valid lead
-I=B;continue}// 2 leads in a row
-if(B<56320){(Q-=3)>-1&&C.push(239,191,189),I=B;continue}// valid surrogate pair
-B=(I-55296<<10|B-56320)+65536}else I&&(Q-=3)>-1&&C.push(239,191,189);// encode utf8
-if(I=null,B<128){if((Q-=1)<0)break;C.push(B)}else if(B<2048){if((Q-=2)<0)break;C.push(B>>6|192,63&B|128)}else if(B<65536){if((Q-=3)<0)break;C.push(B>>12|224,B>>6&63|128,63&B|128)}else if(B<1114112){if((Q-=4)<0)break;C.push(B>>18|240,B>>12&63|128,B>>6&63|128,63&B|128)}else throw Error("Invalid code point")}return C}function AD(A){return function(A){var Q,B,g=function(A){var Q=A.length;if(Q%4>0)throw Error("Invalid string. Length must be a multiple of 4");// Trim off extra bytes after placeholder bytes are found
-// See: https://github.com/beatgammit/base64-js/issues/42
-var B=A.indexOf("=");-1===B&&(B=Q);var g=B===Q?0:4-B%4;return[B,g]}(A),I=g[0],C=g[1],E=new H((I+C)*3/4-C),D=0,w=C>0?I-4:I;for(B=0;B>16&255,E[D++]=Q>>8&255,E[D++]=255&Q;return 2===C&&(Q=a[A.charCodeAt(B)]<<2|a[A.charCodeAt(B+1)]>>4,E[D++]=255&Q),1===C&&(Q=a[A.charCodeAt(B)]<<10|a[A.charCodeAt(B+1)]<<4|a[A.charCodeAt(B+2)]>>2,E[D++]=Q>>8&255,E[D++]=255&Q),E}(function(A){// Node converts strings with length < 2 to ''
-if(// Node strips out invalid characters like \n and \t from the string, base64-js does not
-(A=// Node takes equal signs as end of the Base64 encoding
-(A=A.split("=")[0]).trim().replace(AC,"")).length<2)return"";// Node allows for non-padded base64 strings (missing trailing ===), base64-js does not
-for(;A.length%4!=0;)A+="=";return A}(A))}function Aw(A,Q,B,g){let I;for(I=0;I=Q.length)&&!(I>=A.length);++I)Q[I+B]=A[I];return I}// ArrayBuffer or Uint8Array objects from other contexts (i.e. iframes) do not pass
-// the `instanceof` check but they should be treated as of that type.
-// See: https://github.com/feross/buffer/issues/166
-function Ao(A,Q){return A instanceof Q||null!=A&&null!=A.constructor&&null!=A.constructor.name&&A.constructor.name===Q.name}// Create lookup table for `toString('hex')`
-// See: https://github.com/feross/buffer/issues/219
-let AN=function(){let A="0123456789abcdef",Q=Array(256);for(let B=0;B<16;++B){let g=16*B;for(let I=0;I<16;++I)Q[g+I]=A[B]+A[I]}return Q}();// Return not function with Error if BigInt not supported
-function Ai(A){return"undefined"==typeof BigInt?AG:A}function AG(){throw Error("BigInt not supported")}/******************************************************************************
-Copyright (c) Microsoft Corporation.
-
-Permission to use, copy, modify, and/or distribute this software for any
-purpose with or without fee is hereby granted.
-
-THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
-REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
-AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
-INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
-LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
-OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
-PERFORMANCE OF THIS SOFTWARE.
-***************************************************************************** */function AY(A,Q,B,g){return new(B||(B=Promise))(function(I,C){function E(A){try{w(g.next(A))}catch(A){C(A)}}function D(A){try{w(g.throw(A))}catch(A){C(A)}}function w(A){var Q;A.done?I(A.value):((Q=A.value)instanceof B?Q:new B(function(A){A(Q)})).then(E,D)}w((g=g.apply(A,Q||[])).next())})}"function"==typeof SuppressedError&&SuppressedError,"undefined"!=typeof globalThis?globalThis:"undefined"!=typeof window?window:void 0!==E||"undefined"!=typeof self&&self;var AF={exports:{}},Ah=/*@__PURE__*/function(A){if(A.__esModule)return A;var Q=Object.defineProperty({},"__esModule",{value:!0});return Object.keys(A).forEach(function(B){var g=Object.getOwnPropertyDescriptor(A,B);Object.defineProperty(Q,B,g.get?g:{enumerable:!0,get:function(){return A[B]}})}),Q}(/*#__PURE__*/Object.freeze({__proto__:null,default:{}})),AM=function(){throw Error("ws does not work in the browser. Browser clients must use the native WebSocket object")};!function(A){var Q,B,g,I,C,E,D,w;/*! *****************************************************************************
- Copyright (c) Microsoft Corporation.
-
- Permission to use, copy, modify, and/or distribute this software for any
- purpose with or without fee is hereby granted.
-
- THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
- REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
- AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
- INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
- LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
- OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
- PERFORMANCE OF THIS SOFTWARE.
- ***************************************************************************** */function o(A,Q,B,g){return new(B||(B=Promise))(function(I,C){function E(A){try{w(g.next(A))}catch(A){C(A)}}function D(A){try{w(g.throw(A))}catch(A){C(A)}}function w(A){var Q;A.done?I(A.value):((Q=A.value)instanceof B?Q:new B(function(A){A(Q)})).then(E,D)}w((g=g.apply(A,Q||[])).next())})}/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied. See the License for the
- * specific language governing permissions and limitations
- * under the License.
- *//**
- * Convert string to Uint8array.
- * @param str The string.
- * @returns The corresponding Uint8Array.
- */function N(A){let Q=new Uint8Array(A.length+1);for(let B=0;B>0]}loadU16(A){return this.buffer!=this.memory.buffer&&this.updateViews(),this.viewU16[A>>1]}loadU32(A){return this.buffer!=this.memory.buffer&&this.updateViews(),this.viewU32[A>>2]}loadI32(A){return this.buffer!=this.memory.buffer&&this.updateViews(),this.viewI32[A>>2]}loadI64(A){return this.buffer!=this.memory.buffer&&this.updateViews(),this.viewI32[A>>2]}loadF32(A){return this.buffer!=this.memory.buffer&&this.updateViews(),this.viewF32[A>>2]}loadF64(A){return this.buffer!=this.memory.buffer&&this.updateViews(),this.viewF64[A>>3]}loadPointer(A){return(this.buffer!=this.memory.buffer&&this.updateViews(),this.wasm32)?this.loadU32(A):this.loadI64(A)}loadUSize(A){return(this.buffer!=this.memory.buffer&&this.updateViews(),this.wasm32)?this.loadU32(A):this.loadI64(A)}sizeofPtr(){return this.wasm32?4/* SizeOf.I32 */:8/* SizeOf.I64 */}/**
- * Load raw bytes from ptr.
- * @param ptr The head address
- * @param numBytes The number
- */loadRawBytes(A,Q){this.buffer!=this.memory.buffer&&this.updateViews();let B=new Uint8Array(Q);return B.set(this.viewU8.slice(A,A+Q)),B}/**
- * Load TVMByteArray from ptr.
- *
- * @param ptr The address of the header.
- */loadTVMBytes(A){let Q=this.loadPointer(A),B=this.loadUSize(A+this.sizeofPtr());return this.loadRawBytes(Q,B)}/**
- * Load null-terminated C-string from ptr.
- * @param ptr The head address
- */loadCString(A){this.buffer!=this.memory.buffer&&this.updateViews();// NOTE: the views are still valid for read.
-let Q=[],B=1;for(;0!=B;)0!=(B=this.viewU8[A])&&Q.push(String.fromCharCode(B)),++A;return Q.join("")}/**
- * Store raw bytes to the ptr.
- * @param ptr The head address.
- * @param bytes The bytes content.
- */storeRawBytes(A,Q){this.buffer!=this.memory.buffer&&this.updateViews(),this.viewU8.set(Q,A)}/**
- * Update memory view after the memory growth.
- */updateViews(){this.buffer=this.memory.buffer,this.viewU8=new Uint8Array(this.buffer),this.viewU16=new Uint16Array(this.buffer),this.viewI32=new Int32Array(this.buffer),this.viewU32=new Uint32Array(this.buffer),this.viewF32=new Float32Array(this.buffer),this.viewF64=new Float64Array(this.buffer)}}/**
- * Auxiliary call stack for the FFI calls.
- *
- * Lifecyle of a call stack.
- * - Calls into allocXX to allocate space, mixed with storeXXX to store data.
- * - Calls into ptrFromOffset, no further allocation(as ptrFromOffset can change),
- * can still call into storeXX
- * - Calls into commitToWasmMemory once.
- * - reset.
- */class U{constructor(A,Q,B){/** List of temporay arguments that can be disposed during reset. */this.tempArgs=[],this.stackTop=0,this.basePtr=0,this.addressToSetTargetValue=[],this.memory=A,this.cAllocSpace=Q,this.cFreeSpace=B,this.buffer=new ArrayBuffer(128),this.basePtr=this.cAllocSpace(128),this.viewU8=new Uint8Array(this.buffer),this.viewI32=new Int32Array(this.buffer),this.viewU32=new Uint32Array(this.buffer),this.viewF64=new Float64Array(this.buffer),this.updateViews()}dispose(){0!=this.basePtr&&(this.cFreeSpace(this.basePtr),this.basePtr=0)}/**
- * Rest the call stack so that it can be reused again.
- */reset(){for(this.stackTop=0,h(0===this.addressToSetTargetValue.length);0!=this.tempArgs.length;)this.tempArgs.pop().dispose()}/**
- * Commit all the cached data to WasmMemory.
- * This function can only be called once.
- * No further store function should be called.
- *
- * @param nbytes Number of bytes to be stored.
- */commitToWasmMemory(A=this.stackTop){// commit all pointer values.
-for(;0!=this.addressToSetTargetValue.length;){let[A,Q]=this.addressToSetTargetValue.pop();this.storePtr(A,this.ptrFromOffset(Q))}this.memory.storeRawBytes(this.basePtr,this.viewU8.slice(0,A))}/**
- * Allocate space by number of bytes
- * @param nbytes Number of bytes.
- * @note This function always allocate space that aligns to 64bit.
- */allocRawBytes(A){if(// always aligns to 64bit
-A=A+7>>3<<3,this.stackTop+A>this.buffer.byteLength){let Q=Math.max(2*this.buffer.byteLength,this.stackTop+A),B=this.viewU8;this.buffer=new ArrayBuffer(Q),this.updateViews(),this.viewU8.set(B),0!=this.basePtr&&this.cFreeSpace(this.basePtr),this.basePtr=this.cAllocSpace(Q)}let Q=this.stackTop;return this.stackTop+=A,Q}/**
- * Allocate space for pointers.
- * @param count Number of pointers.
- * @returns The allocated pointer array.
- */allocPtrArray(A){return this.allocRawBytes(this.memory.sizeofPtr()*A)}/**
- * Get the real pointer from offset values.
- * Note that the returned value becomes obsolete if alloc is called on the stack.
- * @param offset The allocated offset.
- */ptrFromOffset(A){return this.basePtr+A}// Store APIs
-storePtr(A,Q){this.memory.wasm32?this.storeU32(A,Q):this.storeI64(A,Q)}storeUSize(A,Q){this.memory.wasm32?this.storeU32(A,Q):this.storeI64(A,Q)}storeI32(A,Q){this.viewI32[A>>2]=Q}storeU32(A,Q){this.viewU32[A>>2]=Q}storeI64(A,Q){let B=A>>2;this.viewI32[B]=4294967295&Q,// sign extend
-this.viewI32[B+1]=Q<0?-1:0}storeF64(A,Q){this.viewF64[A>>3]=Q}storeRawBytes(A,Q){this.viewU8.set(Q,A)}/**
- * Allocate then set C-String pointer to the offset.
- * This function will call into allocBytes to allocate necessary data.
- * The address won't be set immediately(because the possible change of basePtr)
- * and will be filled when we commit the data.
- *
- * @param offset The offset to set ot data pointer.
- * @param data The string content.
- */allocThenSetArgString(A,Q){let B=this.allocRawBytes(Q.length+1);this.storeRawBytes(B,N(Q)),this.addressToSetTargetValue.push([A,B])}/**
- * Allocate then set the argument location with a TVMByteArray.
- * Allocate new temporary space for bytes.
- *
- * @param offset The offset to set ot data pointer.
- * @param data The string content.
- */allocThenSetArgBytes(A,Q){// Note: size of size_t equals sizeof ptr.
-let B=this.allocRawBytes(2*this.memory.sizeofPtr()),g=this.allocRawBytes(Q.length);this.storeRawBytes(g,Q),this.storeUSize(B+this.memory.sizeofPtr(),Q.length),this.addressToSetTargetValue.push([A,B]),this.addressToSetTargetValue.push([B,g])}/**
- * Update internal cache views.
- */updateViews(){this.viewU8=new Uint8Array(this.buffer),this.viewI32=new Int32Array(this.buffer),this.viewU32=new Uint32Array(this.buffer),this.viewF64=new Float64Array(this.buffer)}}/**
- * Environment to impelement most of the JS library functions.
- */class s{constructor(A={},Q=console.log){/**
- * Maintains a table of FTVMWasmPackedCFunc that the C part
- * can call via TVMWasmPackedCFunc.
- *
- * We maintain a separate table so that we can have un-limited amount
- * of functions that do not maps to the address space.
- */this.packedCFuncTable=[void 0],/**
- * Free table index that can be recycled.
- */this.packedCFuncTableFreeId=[],this.logger=Q,this.libProvider=A.wasmLibraryProvider&&A.wasmLibraryProvider.start&&void 0!==A.wasmLibraryProvider.imports?{imports:A.wasmLibraryProvider.imports,start:Q=>{A.wasmLibraryProvider.start(Q)}}:A.imports&&void 0!==A.start?A:A.wasiImport&&void 0!==A.start?{imports:{wasi_snapshot_preview1:A.wasiImport},start:Q=>{A.start(Q)}}:void 0,void 0!==this.libProvider?this.imports=this.libProvider.imports:this.imports=A,// update with more functions
-this.imports.env=this.environment(this.imports.env)}/** Mark the start of the instance. */start(A){void 0!==this.libProvider&&this.libProvider.start(A)}environment(A){return Object.assign({__cxa_thread_atexit:()=>{},// eslint-disable-next-line @typescript-eslint/no-unused-vars
-emscripten_notify_memory_growth:A=>{}},A,{TVMWasmPackedCFunc:(A,Q,B,g,I)=>{let C=this.packedCFuncTable[I];return h(void 0!==C),C(A,Q,B,g,I)},TVMWasmPackedCFuncFinalizer:A=>{this.packedCFuncTable[A]=void 0,this.packedCFuncTableFreeId.push(A)},__console_log:A=>{this.logger(A)}})}}/**
- * DetectGPU device in the environment.
- */function k(){return o(this,void 0,void 0,function*(){if("undefined"!=typeof navigator&&void 0!==navigator.gpu){let A=yield navigator.gpu.requestAdapter({powerPreference:"high-performance"});if(null==A)throw Error("Cannot find adapter that matches the request");let Q=A=>Math.ceil(A/1048576)+"MB";if(1073741824>A.limits.maxBufferSize)throw Error(`Cannot initialize runtime because of requested maxBufferSize exceeds limit. requested=${Q(1073741824)}, limit=${Q(A.limits.maxBufferSize)}. This error may be caused by an older version of the browser (e.g. Chrome 112). You can try to upgrade your browser to Chrome 113 or later.`);if(1073741824>A.limits.maxStorageBufferBindingSize)throw Error(`Cannot initialize runtime because of requested maxStorageBufferBindingSize exceeds limit. requested=${Q(1073741824)}, limit=${Q(A.limits.maxStorageBufferBindingSize)}. `);if(32768>A.limits.maxComputeWorkgroupStorageSize)throw Error(`Cannot initialize runtime because of requested maxComputeWorkgroupStorageSize exceeds limit. requested=32768, limit=${A.limits.maxComputeWorkgroupStorageSize}. `);let B=[];// Always require f16 if available
-A.features.has("shader-f16")&&B.push("shader-f16");let g=yield A.requestAdapterInfo(),I=yield A.requestDevice({requiredLimits:{maxBufferSize:1073741824,maxStorageBufferBindingSize:1073741824,maxComputeWorkgroupStorageSize:32768},requiredFeatures:B});return{adapter:A,adapterInfo:g,device:I}}})}let R=`
-@group(0) @binding(0) var my_sampler : sampler;
-@group(0) @binding(1) var my_texture : texture_2d;
-
-struct VertexOutput {
- @builtin(position) position : vec4,
- @location(0) uv : vec2,
-}
-
-@vertex
-fn vertex_main(@builtin(vertex_index) vidx : u32) -> VertexOutput {
- const pos = array(
- vec2( 1.0, 1.0),
- vec2( 1.0, -1.0),
- vec2(-1.0, -1.0),
- vec2( 1.0, 1.0),
- vec2(-1.0, -1.0),
- vec2(-1.0, 1.0),
- );
-
- const uv = array(
- vec2(1.0, 0.0),
- vec2(1.0, 1.0),
- vec2(0.0, 1.0),
- vec2(1.0, 0.0),
- vec2(0.0, 1.0),
- vec2(0.0, 0.0),
- );
-
- var output : VertexOutput;
- output.position = vec4(pos[vidx], 0.0, 1.0);
- output.uv = uv[vidx];
- return output;
-}
-
-@fragment
-fn fragment_main(@location(0) uv : vec2) -> @location(0) vec4 {
- return textureSample(my_texture, my_sampler, uv);
-}
-
-@fragment
-fn fragment_clear(@location(0) uv : vec2) -> @location(0) vec4 {
- return vec4(1.0, 1.0, 1.0, 1.0);
-}
-`;class y{constructor(A,Q){this.device=A;let B=Q.getContext("webgpu");if(null==B)throw Error("Cannot bind WebGPU context");// avoid possible ts complain
-this.canvasContext=B,this.canvasTextureFormat=navigator.gpu.getPreferredCanvasFormat(),this.canvasContext.configure({device:this.device,format:this.canvasTextureFormat,alphaMode:"opaque"}),this.renderPipeline=A.createRenderPipeline({layout:"auto",vertex:{module:A.createShaderModule({code:R}),entryPoint:"vertex_main"},fragment:{module:A.createShaderModule({code:R}),entryPoint:"fragment_main",targets:[{format:this.canvasTextureFormat}]},primitive:{topology:"triangle-list"}}),this.clearPipeline=A.createRenderPipeline({layout:"auto",vertex:{module:A.createShaderModule({code:R}),entryPoint:"vertex_main"},fragment:{module:A.createShaderModule({code:R}),entryPoint:"fragment_clear",targets:[{format:this.canvasTextureFormat}]},primitive:{topology:"triangle-list"}}),this.renderSampler=A.createSampler({magFilter:"linear",minFilter:"linear"}),// staging texture always be in RGBA
-this.stagingTexture=A.createTexture({size:[Q.height,Q.width,1],format:"rgba8unorm",usage:GPUTextureUsage.TEXTURE_BINDING|GPUTextureUsage.COPY_DST|GPUTextureUsage.RENDER_ATTACHMENT})}clear(){let A=this.device.createCommandEncoder(),Q=A.beginRenderPass({colorAttachments:[{view:this.canvasContext.getCurrentTexture().createView(),clearValue:{r:0,g:0,b:0,a:1},loadOp:"clear",storeOp:"store"}]});Q.setPipeline(this.clearPipeline);let B=this.device.createBindGroup({layout:this.renderPipeline.getBindGroupLayout(0),entries:[{binding:0,resource:this.renderSampler},{binding:1,resource:this.stagingTexture.createView()}]});Q.setBindGroup(0,B),Q.draw(6,1,0,0),Q.end(),this.device.queue.submit([A.finish()])}draw(A,Q,B){(Q!=this.stagingTexture.height||B!=this.stagingTexture.width)&&(this.stagingTexture.destroy(),this.stagingTexture=this.device.createTexture({size:[Q,B,1],format:"rgba8unorm",usage:GPUTextureUsage.TEXTURE_BINDING|GPUTextureUsage.COPY_DST|GPUTextureUsage.RENDER_ATTACHMENT}));let g=this.device.createCommandEncoder();g.copyBufferToTexture({buffer:A,offset:0,bytesPerRow:4*this.stagingTexture.width},{texture:this.stagingTexture},{width:this.stagingTexture.width,height:this.stagingTexture.height});let I=g.beginRenderPass({colorAttachments:[{view:this.canvasContext.getCurrentTexture().createView(),clearValue:{r:0,g:0,b:0,a:1},loadOp:"clear",storeOp:"store"}]});I.setPipeline(this.renderPipeline);let C=this.device.createBindGroup({layout:this.renderPipeline.getBindGroupLayout(0),entries:[{binding:0,resource:this.renderSampler},{binding:1,resource:this.stagingTexture.createView()}]});I.setBindGroup(0,C),I.draw(6,1,0,0),I.end(),this.device.queue.submit([g.finish()])}dispose(){this.stagingTexture.destroy()}}/**
- * WebGPU context
- * Manages all the webgpu resources here.
- */class K{constructor(A,Q){// internal data
-this.bufferTable=[void 0],this.bufferTableFreeId=[],this.podArgStagingBuffers=[],this.canvasRenderManager=void 0,// number of pod arg staging buffers
-this.maxNumPodArgsStagingBuffers=2,// flags for debugging
-// stats of the runtime.
-// peak allocation
-this.peakAllocatedBytes=0,// current allocation
-this.currAllocatedBytes=0,// all allocation(ignoring free)
-this.allAllocatedBytes=0,// shader submit counter
-this.shaderSubmitCounter=0,// limite number of shaders to be submitted, useful for debugging, default to -1
-this.debugShaderSubmitLimit=-1,// log and sync each step
-this.debugLogFinish=!1,this.memory=A,this.device=Q}/**
- * Dispose context.
- */dispose(){var A,Q,B;for(null===(A=this.canvasRenderManager)||void 0===A||A.dispose(),this.bufferTableFreeId=[];0!=this.bufferTable.length;)null===(Q=this.bufferTable.pop())||void 0===Q||Q.destroy();for(;0!=this.podArgStagingBuffers.length;)null===(B=this.podArgStagingBuffers.pop())||void 0===B||B.destroy();this.device.destroy()}/**
- * Wait for all pending GPU tasks to complete
- */sync(){return o(this,void 0,void 0,function*(){yield this.device.queue.onSubmittedWorkDone()})}/**
- * Obtain the runtime information in readable format.
- */runtimeStatsText(){return"peak-memory="+Math.ceil(this.peakAllocatedBytes/1048576)+" MB"+(", all-memory="+Math.ceil(this.allAllocatedBytes/1048576)+" MB, shader-submissions=")+this.shaderSubmitCounter}/**
- * Draw image from data in storage buffer.
- * @param ptr The GPU ptr
- * @param height The height of the image.
- * @param width The width of the image.
- */drawImageFromBuffer(A,Q,B){if(void 0==this.canvasRenderManager)throw Error("Do not have a canvas context, call bindCanvas first");this.canvasRenderManager.draw(this.gpuBufferFromPtr(A),Q,B)}/**
- * Copy raw bytes into buffer ptr.
- *
- * @param rawBytes The raw bytes
- * @param toPtr The target gpu buffer ptr
- * @param toOffset The beginning offset
- * @param nbytes Number of bytes
- */copyRawBytesToBuffer(A,Q,B,g){// Perhaps it would be more useful to use a staging buffer?
-this.device.queue.writeBuffer(this.gpuBufferFromPtr(Q),B,A,0,g)}/**
- * Clear canvas
- */clearCanvas(){var A;null===(A=this.canvasRenderManager)||void 0===A||A.clear()}/**
- * Bind a canvas element to the runtime.
- * @param canvas The HTML canvas/
- */bindCanvas(A){this.canvasRenderManager=new y(this.device,A)}/**
- * Create a PackedFunc that runs the given shader
- * via createComputePipeline
- *
- * @param info The function information already parsed as a record.
- * @param code The shader data(in WGSL)
- * @returns The shader
- */createShader(A,Q){return this.createShadeInternal(A,Q,!1)}/**
- * Create a PackedFunc that runs the given shader asynchronously
- * via createComputePipelineAsync
- *
- * @param info The function information already parsed as a record.
- * @param code The shader data(in WGSL)
- * @returns The shader
- */createShaderAsync(A,Q){return o(this,void 0,void 0,function*(){return yield this.createShadeInternal(A,Q,!0)})}/**
- * Get the pod arg staging buffer
- * \param nbytes The minimum size.
- * \return The allocated buffer
- */getPodArgsBuffer(A){let Q;this.podArgStagingBuffers.length>=this.maxNumPodArgsStagingBuffers&&(Q=this.podArgStagingBuffers.shift());// minimum of 16 bytes
-let B=16;for(void 0!==Q&&(B=Q.size,Q.size=0&&A<3),g.push(A)}else if(B.startsWith("threadIdx.")){let A=B.charCodeAt(B.length-1)-120;h(A>=0&&A<3),g.push(A+3)}else if(B.startsWith("paramWriteAccess:"))I=JSON.parse(B.substring(17));else throw Error("Cannot handle thread_axis "+B)}let C=[],E=[],D=[];for(let Q=0;Q(...B)=>{if(-1!=this.debugShaderSubmitLimit&&this.shaderSubmitCounter>=this.debugShaderSubmitLimit){this.shaderSubmitCounter+=1;return}let I=this.device.createCommandEncoder(),C=I.beginComputePass();C.setPipeline(Q);let o=[],N=E.length+D.length;h(B.length==N+g.length);let i=[1,1,1,1,1,1];for(let A=0;A=65536){let A=i[0],Q=i[2];for(;A>=65536;)A%2==0?A/=2:A=(A+1)/2,Q*=2;i[0]=A,i[2]=Q,h(A*Q>=G)}for(let A=0;A{console.log("["+Q+"][Debug] finish shader"+A.name)})}this.shaderSubmitCounter+=1},i=this.device.createShaderModule({code:Q,hints:{main:{layout:o}}});if(B)return this.device.createComputePipelineAsync({layout:o,compute:{module:i,entryPoint:A.name}}).then(A=>N(A));{let Q=this.device.createComputePipeline({layout:o,compute:{module:i,entryPoint:A.name}});return N(Q)}}/**
- * Get the device API according to its name
- * @param The name of the API.
- * @returns The corresponding device api.
- */getDeviceAPI(A){if("deviceAllocDataSpace"==A)return A=>this.deviceAllocDataSpace(A);if("deviceFreeDataSpace"==A)return A=>this.deviceFreeDataSpace(A);if("deviceCopyToGPU"==A)return(A,Q,B,g)=>{this.deviceCopyToGPU(A,Q,B,g)};if("deviceCopyFromGPU"==A)return(A,Q,B,g)=>{this.deviceCopyFromGPU(A,Q,B,g)};if("deviceCopyWithinGPU"==A)return(A,Q,B,g,I)=>{this.deviceCopyWithinGPU(A,Q,B,g,I)};throw Error("Unknown DeviceAPI function "+A)}// DeviceAPI
-deviceAllocDataSpace(A){0==A&&(A=1);let Q=this.device.createBuffer({size:A,usage:GPUBufferUsage.STORAGE|GPUBufferUsage.COPY_SRC|GPUBufferUsage.COPY_DST});this.currAllocatedBytes+=A,this.allAllocatedBytes+=A,this.currAllocatedBytes>this.peakAllocatedBytes&&(this.peakAllocatedBytes=this.currAllocatedBytes);let B=this.attachToBufferTable(Q);return B}deviceFreeDataSpace(A){let Q=this.bufferTable[A];this.bufferTable[A]=void 0,h(void 0!==Q),this.bufferTableFreeId.push(A),this.currAllocatedBytes-=Q.size,Q.destroy()}deviceCopyToGPU(A,Q,B,g){// Perhaps it would be more useful to use a staging buffer?
-let I=this.memory.loadRawBytes(A,g);this.device.queue.writeBuffer(this.gpuBufferFromPtr(Q),B,I,0,g)}deviceCopyFromGPU(A,Q,B,g){// Perhaps it would be more useful to reuse a staging buffer?
-let I=this.device.createBuffer({size:g,usage:GPUBufferUsage.MAP_READ|GPUBufferUsage.COPY_DST}),C=this.device.createCommandEncoder();C.copyBufferToBuffer(this.gpuBufferFromPtr(A),Q,I,0,g);let E=C.finish();this.device.queue.submit([E]),I.mapAsync(GPUMapMode.READ).then(()=>{let A=I.getMappedRange();this.memory.storeRawBytes(B,new Uint8Array(A)),I.destroy()})}deviceCopyWithinGPU(A,Q,B,g,I){let C=this.device.createCommandEncoder();C.copyBufferToBuffer(this.gpuBufferFromPtr(A),Q,this.gpuBufferFromPtr(B),g,I);let E=C.finish();this.device.queue.submit([E])}gpuBufferFromPtr(A){let Q=this.bufferTable[A];return h(void 0!==Q),Q}attachToBufferTable(A){if(0!=this.bufferTableFreeId.length){let Q=this.bufferTableFreeId.pop();return this.bufferTable[Q]=A,Q}{let Q=this.bufferTable.length;return this.bufferTable.push(A),Q}}}function L(){var A,Q,B,g,I,C,E,D,w,o,N,i,F,h,M,U,s=void 0!==s?s:{},k={};k.start=function(A){k.successCallback(A)};var s={instantiateWasm:function(A,Q){k.imports=A,k.successCallback=Q},wasmLibraryProvider:k},R=Object.assign({},s),y=[],K="./this.program",L=(A,Q)=>{throw Q},c="object"==typeof window,J="function"==typeof importScripts,a="object"==typeof Y&&"object"==typeof Y.versions&&"string"==typeof Y.versions.node,H="";if(a){var S=Ah;H=J?Ah.dirname(H)+"/":G+"/",Q=(A,Q)=>(A=V(A)?new URL(A):Ah.normalize(A),S.readFileSync(A,Q?void 0:"utf8")),g=A=>{var B=Q(A,!0);return B.buffer||(B=new Uint8Array(B)),B},B=(A,Q,B)=>{A=V(A)?new URL(A):Ah.normalize(A),S.readFile(A,function(A,g){A?B(A):Q(g.buffer)})},Y.argv.length>1&&(K=Y.argv[1].replace(/\\/g,"/")),y=Y.argv.slice(2),AF.exports=s,Y.on("uncaughtException",function(A){if(!(A instanceof _))throw A}),Y.versions.node.split(".")[0]<15&&Y.on("unhandledRejection",function(A){throw A}),L=(A,Q)=>{if(f)throw Y.exitCode=A,Q;(function(A){if(A instanceof _)return;let Q=A;A&&"object"==typeof A&&A.stack&&(Q=[A,A.stack]),O("exiting due to exception: "+Q)})(Q),Y.exit(A)},s.inspect=function(){return"[Emscripten Module 
object]"}}else(c||J)&&(J?H=self.location.href:"undefined"!=typeof document&&document.currentScript&&(H=document.currentScript.src),H=0!==H.indexOf("blob:")?H.substr(0,H.replace(/[?#].*/,"").lastIndexOf("/")+1):"",Q=A=>{var Q=new XMLHttpRequest;return Q.open("GET",A,!1),Q.send(null),Q.responseText},J&&(g=A=>{var Q=new XMLHttpRequest;return Q.open("GET",A,!1),Q.responseType="arraybuffer",Q.send(null),new Uint8Array(Q.response)}),B=(A,Q,B)=>{var g=new XMLHttpRequest;g.open("GET",A,!0),g.responseType="arraybuffer",g.onload=()=>{if(200==g.status||0==g.status&&g.response){Q(g.response);return}B()},g.onerror=B,g.send(null)});var t=s.print||console.log.bind(console),O=s.printErr||console.warn.bind(console);Object.assign(s,R),R=null,s.arguments&&(y=s.arguments),s.thisProgram&&(K=s.thisProgram),s.quit&&(L=s.quit),s.wasmBinary&&(I=s.wasmBinary);var f=s.noExitRuntime||!0;"object"!=typeof WebAssembly&&u("no native wasm support detected");var e=!1,Z="undefined"!=typeof TextDecoder?new TextDecoder("utf8"):void 0;function n(A,Q,B){for(var g=Q+B,I=Q;A[I]&&!(I>=g);)++I;if(I-Q>16&&A.buffer&&Z)return Z.decode(A.subarray(Q,I));for(var C="";Q>10,56320|1023&o)}}return C}function m(A,Q,B,g){if(!(g>0))return 0;for(var I=B,C=B+g-1,E=0;E=55296&&D<=57343&&(D=65536+((1023&D)<<10)|1023&A.charCodeAt(++E)),D<=127){if(B>=C)break;Q[B++]=D}else if(D<=2047){if(B+1>=C)break;Q[B++]=192|D>>6,Q[B++]=128|63&D}else if(D<=65535){if(B+2>=C)break;Q[B++]=224|D>>12,Q[B++]=128|D>>6&63,Q[B++]=128|63&D}else{if(B+3>=C)break;Q[B++]=240|D>>18,Q[B++]=128|D>>12&63,Q[B++]=128|D>>6&63,Q[B++]=128|63&D}}return Q[B]=0,B-I}function j(A){for(var Q=0,B=0;B=55296&&g<=57343?(Q+=4,++B):Q+=3}return Q}function q(){var A=C.buffer;s.HEAP8=D=new Int8Array(A),s.HEAP16=new Int16Array(A),s.HEAP32=o=new Int32Array(A),s.HEAPU8=w=new Uint8Array(A),s.HEAPU16=new Uint16Array(A),s.HEAPU32=N=new Uint32Array(A),s.HEAPF32=new Float32Array(A),s.HEAPF64=new Float64Array(A),s.HEAP64=new BigInt64Array(A),s.HEAPU64=new BigUint64Array(A)}var 
p=[],d=[],z=[],x=[],T=0,P=null;function l(A){T++,s.monitorRunDependencies&&s.monitorRunDependencies(T)}function W(A){if(T--,s.monitorRunDependencies&&s.monitorRunDependencies(T),0==T&&P){var Q=P;P=null,Q()}}function u(A){throw s.onAbort&&s.onAbort(A),O(A="Aborted("+A+")"),e=!0,E=1,A+=". Build with -sASSERTIONS for more info.",new WebAssembly.RuntimeError(A)}function b(A){return A.startsWith("data:application/octet-stream;base64,")}function V(A){return A.startsWith("file://")}function v(A){try{if(A==i&&I)return new Uint8Array(I);if(g)return g(A);throw"both async and sync fetching of the wasm failed"}catch(A){u(A)}}function X(A,Q,g){return(function(A){if(!I&&(c||J)){if("function"==typeof fetch&&!V(A))return fetch(A,{credentials:"same-origin"}).then(function(Q){if(!Q.ok)throw"failed to load wasm binary file at '"+A+"'";return Q.arrayBuffer()}).catch(function(){return v(A)});if(B)return new Promise(function(Q,g){B(A,function(A){Q(new Uint8Array(A))},g)})}return Promise.resolve().then(function(){return v(A)})})(A).then(function(A){return WebAssembly.instantiate(A,Q)}).then(function(A){return A}).then(g,function(A){O("failed to asynchronously prepare wasm: "+A),u(A)})}function _(A){this.name="ExitStatus",this.message="Program terminated with exit("+A+")",this.status=A}function $(A){for(;A.length>0;)A.shift()(s)}b(i="tvmjs_runtime.wasm")||(A=i,i=s.locateFile?s.locateFile(A,H):H+A),M=a?()=>{var A=Y.hrtime();return 1e3*A[0]+A[1]/1e6}:()=>performance.now();var AA={isAbs:A=>"/"===A.charAt(0),splitPath:A=>/^(\/?|)([\s\S]*?)((?:\.{1,2}|[^\/]+?|)(\.[^.\/]*|))(?:[\/]*)$/.exec(A).slice(1),normalizeArray:(A,Q)=>{for(var B=0,g=A.length-1;g>=0;g--){var I=A[g];"."===I?A.splice(g,1):".."===I?(A.splice(g,1),B++):B&&(A.splice(g,1),B--)}if(Q)for(;B;B--)A.unshift("..");return A},normalize:A=>{var Q=AA.isAbs(A),B="/"===A.substr(-1);return(A=AA.normalizeArray(A.split("/").filter(A=>!!A),!Q).join("/"))||Q||(A="."),A&&B&&(A+="/"),(Q?"/":"")+A},dirname:A=>{var 
Q=AA.splitPath(A),B=Q[0],g=Q[1];return B||g?(g&&(g=g.substr(0,g.length-1)),B+g):"."},basename:A=>{if("/"===A)return"/";var Q=(A=(A=AA.normalize(A)).replace(/\/$/,"")).lastIndexOf("/");return -1===Q?A:A.substr(Q+1)},join:function(){var A=Array.prototype.slice.call(arguments);return AA.normalize(A.join("/"))},join2:(A,Q)=>AA.normalize(A+"/"+Q)},AQ={resolve:function(){for(var A="",Q=!1,B=arguments.length-1;B>=-1&&!Q;B--){var g=B>=0?arguments[B]:AC.cwd();if("string"!=typeof g)throw TypeError("Arguments to path.resolve must be strings");if(!g)return"";A=g+"/"+A,Q=AA.isAbs(g)}return A=AA.normalizeArray(A.split("/").filter(A=>!!A),!Q).join("/"),(Q?"/":"")+A||"."},relative:(A,Q)=>{function B(A){for(var Q=0;Q=0&&""===A[B];B--);return Q>B?[]:A.slice(Q,B-Q+1)}A=AQ.resolve(A).substr(1),Q=AQ.resolve(Q).substr(1);for(var g=B(A.split("/")),I=B(Q.split("/")),C=Math.min(g.length,I.length),E=C,D=0;D0?B:j(A)+1),I=m(A,g,0,g.length);return Q&&(g.length=I),g}var Ag={ttys:[],init:function(){},shutdown:function(){},register:function(A,Q){Ag.ttys[A]={input:[],output:[],ops:Q},AC.registerDevice(A,Ag.stream_ops)},stream_ops:{open:function(A){var Q=Ag.ttys[A.node.rdev];if(!Q)throw new AC.ErrnoError(43);A.tty=Q,A.seekable=!1},close:function(A){A.tty.ops.fsync(A.tty)},fsync:function(A){A.tty.ops.fsync(A.tty)},read:function(A,Q,B,g,I){if(!A.tty||!A.tty.ops.get_char)throw new AC.ErrnoError(60);for(var C,E=0,D=0;D0?B.slice(0,g).toString("utf-8"):null}else"undefined"!=typeof window&&"function"==typeof window.prompt?null!==(Q=window.prompt("Input: "))&&(Q+="\n"):"function"==typeof readline&&null!==(Q=readline())&&(Q+="\n");if(!Q)return null;A.input=AB(Q,!0)}return 
A.input.shift()},put_char:function(A,Q){null===Q||10===Q?(t(n(A.output,0)),A.output=[]):0!=Q&&A.output.push(Q)},fsync:function(A){A.output&&A.output.length>0&&(t(n(A.output,0)),A.output=[])}},default_tty1_ops:{put_char:function(A,Q){null===Q||10===Q?(O(n(A.output,0)),A.output=[]):0!=Q&&A.output.push(Q)},fsync:function(A){A.output&&A.output.length>0&&(O(n(A.output,0)),A.output=[])}}},AI={ops_table:null,mount:function(A){return AI.createNode(null,"/",16895,0)},createNode:function(A,Q,B,g){if(AC.isBlkdev(B)||AC.isFIFO(B))throw new AC.ErrnoError(63);AI.ops_table||(AI.ops_table={dir:{node:{getattr:AI.node_ops.getattr,setattr:AI.node_ops.setattr,lookup:AI.node_ops.lookup,mknod:AI.node_ops.mknod,rename:AI.node_ops.rename,unlink:AI.node_ops.unlink,rmdir:AI.node_ops.rmdir,readdir:AI.node_ops.readdir,symlink:AI.node_ops.symlink},stream:{llseek:AI.stream_ops.llseek}},file:{node:{getattr:AI.node_ops.getattr,setattr:AI.node_ops.setattr},stream:{llseek:AI.stream_ops.llseek,read:AI.stream_ops.read,write:AI.stream_ops.write,allocate:AI.stream_ops.allocate,mmap:AI.stream_ops.mmap,msync:AI.stream_ops.msync}},link:{node:{getattr:AI.node_ops.getattr,setattr:AI.node_ops.setattr,readlink:AI.node_ops.readlink},stream:{}},chrdev:{node:{getattr:AI.node_ops.getattr,setattr:AI.node_ops.setattr},stream:AC.chrdev_stream_ops}});var I=AC.createNode(A,Q,B,g);return AC.isDir(I.mode)?(I.node_ops=AI.ops_table.dir.node,I.stream_ops=AI.ops_table.dir.stream,I.contents={}):AC.isFile(I.mode)?(I.node_ops=AI.ops_table.file.node,I.stream_ops=AI.ops_table.file.stream,I.usedBytes=0,I.contents=null):AC.isLink(I.mode)?(I.node_ops=AI.ops_table.link.node,I.stream_ops=AI.ops_table.link.stream):AC.isChrdev(I.mode)&&(I.node_ops=AI.ops_table.chrdev.node,I.stream_ops=AI.ops_table.chrdev.stream),I.timestamp=Date.now(),A&&(A.contents[Q]=I,A.timestamp=I.timestamp),I},getFileDataAsTypedArray:function(A){return A.contents?A.contents.subarray?A.contents.subarray(0,A.usedBytes):new Uint8Array(A.contents):new 
Uint8Array(0)},expandFileStorage:function(A,Q){var B=A.contents?A.contents.length:0;if(!(B>=Q)){Q=Math.max(Q,B*(B<1048576?2:1.125)>>>0),0!=B&&(Q=Math.max(Q,256));var g=A.contents;A.contents=new Uint8Array(Q),A.usedBytes>0&&A.contents.set(g.subarray(0,A.usedBytes),0)}},resizeFileStorage:function(A,Q){if(A.usedBytes!=Q){if(0==Q)A.contents=null,A.usedBytes=0;else{var B=A.contents;A.contents=new Uint8Array(Q),B&&A.contents.set(B.subarray(0,Math.min(Q,A.usedBytes))),A.usedBytes=Q}}},node_ops:{getattr:function(A){var Q={};return Q.dev=AC.isChrdev(A.mode)?A.id:1,Q.ino=A.id,Q.mode=A.mode,Q.nlink=1,Q.uid=0,Q.gid=0,Q.rdev=A.rdev,AC.isDir(A.mode)?Q.size=4096:AC.isFile(A.mode)?Q.size=A.usedBytes:AC.isLink(A.mode)?Q.size=A.link.length:Q.size=0,Q.atime=new Date(A.timestamp),Q.mtime=new Date(A.timestamp),Q.ctime=new Date(A.timestamp),Q.blksize=4096,Q.blocks=Math.ceil(Q.size/Q.blksize),Q},setattr:function(A,Q){void 0!==Q.mode&&(A.mode=Q.mode),void 0!==Q.timestamp&&(A.timestamp=Q.timestamp),void 0!==Q.size&&AI.resizeFileStorage(A,Q.size)},lookup:function(A,Q){throw AC.genericErrors[44]},mknod:function(A,Q,B,g){return AI.createNode(A,Q,B,g)},rename:function(A,Q,B){if(AC.isDir(A.mode)){var g;try{g=AC.lookupNode(Q,B)}catch(A){}if(g)for(var I in g.contents)throw new AC.ErrnoError(55)}delete A.parent.contents[A.name],A.parent.timestamp=Date.now(),A.name=B,Q.contents[B]=A,Q.timestamp=A.parent.timestamp,A.parent=Q},unlink:function(A,Q){delete A.contents[Q],A.timestamp=Date.now()},rmdir:function(A,Q){var B=AC.lookupNode(A,Q);for(var g in B.contents)throw new AC.ErrnoError(55);delete A.contents[Q],A.timestamp=Date.now()},readdir:function(A){var Q=[".",".."];for(var B in A.contents)A.contents.hasOwnProperty(B)&&Q.push(B);return Q},symlink:function(A,Q,B){var g=AI.createNode(A,Q,41471,0);return g.link=B,g},readlink:function(A){if(!AC.isLink(A.mode))throw new AC.ErrnoError(28);return A.link}},stream_ops:{read:function(A,Q,B,g,I){var C=A.node.contents;if(I>=A.node.usedBytes)return 0;var 
E=Math.min(A.node.usedBytes-I,g);if(E>8&&C.subarray)Q.set(C.subarray(I,I+E),B);else for(var D=0;D0||B+Q{if(!(A=AQ.resolve(A)))return{path:"",node:null};if((Q=Object.assign({follow_mount:!0,recurse_count:0},Q)).recurse_count>8)throw new AC.ErrnoError(32);for(var B=A.split("/").filter(A=>!!A),g=AC.root,I="/",C=0;C40)throw new AC.ErrnoError(32)}}return{path:I,node:g}},getPath:A=>{for(var Q;;){if(AC.isRoot(A)){var B=A.mount.mountpoint;if(!Q)return B;return"/"!==B[B.length-1]?B+"/"+Q:B+Q}Q=Q?A.name+"/"+Q:A.name,A=A.parent}},hashName:(A,Q)=>{for(var B=0,g=0;g>>0)%AC.nameTable.length},hashAddNode:A=>{var Q=AC.hashName(A.parent.id,A.name);A.name_next=AC.nameTable[Q],AC.nameTable[Q]=A},hashRemoveNode:A=>{var Q=AC.hashName(A.parent.id,A.name);if(AC.nameTable[Q]===A)AC.nameTable[Q]=A.name_next;else for(var B=AC.nameTable[Q];B;){if(B.name_next===A){B.name_next=A.name_next;break}B=B.name_next}},lookupNode:(A,Q)=>{var B=AC.mayLookup(A);if(B)throw new AC.ErrnoError(B,A);for(var g=AC.hashName(A.id,Q),I=AC.nameTable[g];I;I=I.name_next){var C=I.name;if(I.parent.id===A.id&&C===Q)return I}return AC.lookup(A,Q)},createNode:(A,Q,B,g)=>{var I=new AC.FSNode(A,Q,B,g);return AC.hashAddNode(I),I},destroyNode:A=>{AC.hashRemoveNode(A)},isRoot:A=>A===A.parent,isMountpoint:A=>!!A.mounted,isFile:A=>(61440&A)==32768,isDir:A=>(61440&A)==16384,isLink:A=>(61440&A)==40960,isChrdev:A=>(61440&A)==8192,isBlkdev:A=>(61440&A)==24576,isFIFO:A=>(61440&A)==4096,isSocket:A=>(49152&A)==49152,flagModes:{r:0,"r+":2,w:577,"w+":578,a:1089,"a+":1090},modeStringToFlags:A=>{var Q=AC.flagModes[A];if(void 0===Q)throw Error("Unknown file open mode: "+A);return Q},flagsToPermissionString:A=>{var Q=["r","w","rw"][3&A];return 512&A&&(Q+="w"),Q},nodePermissions:(A,Q)=>AC.ignorePermissions?0:Q.includes("r")&&!(292&A.mode)||Q.includes("w")&&!(146&A.mode)||Q.includes("x")&&!(73&A.mode)?2:0,mayLookup:A=>AC.nodePermissions(A,"x")||(A.node_ops.lookup?0:2),mayCreate:(A,Q)=>{try{return AC.lookupNode(A,Q),20}catch(A){}return 
AC.nodePermissions(A,"wx")},mayDelete:(A,Q,B)=>{try{g=AC.lookupNode(A,Q)}catch(A){return A.errno}var g,I=AC.nodePermissions(A,"wx");if(I)return I;if(B){if(!AC.isDir(g.mode))return 54;if(AC.isRoot(g)||AC.getPath(g)===AC.cwd())return 10}else if(AC.isDir(g.mode))return 31;return 0},mayOpen:(A,Q)=>A?AC.isLink(A.mode)?32:AC.isDir(A.mode)&&("r"!==AC.flagsToPermissionString(Q)||512&Q)?31:AC.nodePermissions(A,AC.flagsToPermissionString(Q)):44,MAX_OPEN_FDS:4096,nextfd:(A=0,Q=AC.MAX_OPEN_FDS)=>{for(var B=A;B<=Q;B++)if(!AC.streams[B])return B;throw new AC.ErrnoError(33)},getStream:A=>AC.streams[A],createStream:(A,Q,B)=>{AC.FSStream||(AC.FSStream=function(){this.shared={}},AC.FSStream.prototype={},Object.defineProperties(AC.FSStream.prototype,{object:{get:function(){return this.node},set:function(A){this.node=A}},isRead:{get:function(){return(2097155&this.flags)!=1}},isWrite:{get:function(){return(2097155&this.flags)!=0}},isAppend:{get:function(){return 1024&this.flags}},flags:{get:function(){return this.shared.flags},set:function(A){this.shared.flags=A}},position:{get:function(){return this.shared.position},set:function(A){this.shared.position=A}}})),A=Object.assign(new AC.FSStream,A);var g=AC.nextfd(Q,B);return A.fd=g,AC.streams[g]=A,A},closeStream:A=>{AC.streams[A]=null},chrdev_stream_ops:{open:A=>{var Q=AC.getDevice(A.node.rdev);A.stream_ops=Q.stream_ops,A.stream_ops.open&&A.stream_ops.open(A)},llseek:()=>{throw new AC.ErrnoError(70)}},major:A=>A>>8,minor:A=>255&A,makedev:(A,Q)=>A<<8|Q,registerDevice:(A,Q)=>{AC.devices[A]={stream_ops:Q}},getDevice:A=>AC.devices[A],getMounts:A=>{for(var Q=[],B=[A];B.length;){var g=B.pop();Q.push(g),B.push.apply(B,g.mounts)}return Q},syncfs:(A,Q)=>{"function"==typeof A&&(Q=A,A=!1),AC.syncFSRequests++,AC.syncFSRequests>1&&O("warning: "+AC.syncFSRequests+" FS.syncfs operations in flight at once, probably just doing extra work");var B=AC.getMounts(AC.root.mount),g=0;function I(A){return AC.syncFSRequests--,Q(A)}function C(A){if(A)return 
C.errored?void 0:(C.errored=!0,I(A));++g>=B.length&&I(null)}B.forEach(Q=>{if(!Q.type.syncfs)return C(null);Q.type.syncfs(Q,A,C)})},mount:(A,Q,B)=>{var g,I="/"===B,C=!B;if(I&&AC.root)throw new AC.ErrnoError(10);if(!I&&!C){var E=AC.lookupPath(B,{follow_mount:!1});if(B=E.path,g=E.node,AC.isMountpoint(g))throw new AC.ErrnoError(10);if(!AC.isDir(g.mode))throw new AC.ErrnoError(54)}var D={type:A,opts:Q,mountpoint:B,mounts:[]},w=A.mount(D);return w.mount=D,D.root=w,I?AC.root=w:g&&(g.mounted=D,g.mount&&g.mount.mounts.push(D)),w},unmount:A=>{var Q=AC.lookupPath(A,{follow_mount:!1});if(!AC.isMountpoint(Q.node))throw new AC.ErrnoError(28);var B=Q.node,g=B.mounted,I=AC.getMounts(g);Object.keys(AC.nameTable).forEach(A=>{for(var Q=AC.nameTable[A];Q;){var B=Q.name_next;I.includes(Q.mount)&&AC.destroyNode(Q),Q=B}}),B.mounted=null;var C=B.mount.mounts.indexOf(g);B.mount.mounts.splice(C,1)},lookup:(A,Q)=>A.node_ops.lookup(A,Q),mknod:(A,Q,B)=>{var g=AC.lookupPath(A,{parent:!0}).node,I=AA.basename(A);if(!I||"."===I||".."===I)throw new AC.ErrnoError(28);var C=AC.mayCreate(g,I);if(C)throw new AC.ErrnoError(C);if(!g.node_ops.mknod)throw new AC.ErrnoError(63);return g.node_ops.mknod(g,I,Q,B)},create:(A,Q)=>(Q=(void 0!==Q?Q:438)&4095|32768,AC.mknod(A,Q,0)),mkdir:(A,Q)=>(Q=(void 0!==Q?Q:511)&1023|16384,AC.mknod(A,Q,0)),mkdirTree:(A,Q)=>{for(var B=A.split("/"),g="",I=0;I(void 0===B&&(B=Q,Q=438),Q|=8192,AC.mknod(A,Q,B)),symlink:(A,Q)=>{if(!AQ.resolve(A))throw new AC.ErrnoError(44);var B=AC.lookupPath(Q,{parent:!0}).node;if(!B)throw new AC.ErrnoError(44);var g=AA.basename(Q),I=AC.mayCreate(B,g);if(I)throw new AC.ErrnoError(I);if(!B.node_ops.symlink)throw new AC.ErrnoError(63);return B.node_ops.symlink(B,g,A)},rename:(A,Q)=>{var B,g,I,C=AA.dirname(A),E=AA.dirname(Q),D=AA.basename(A),w=AA.basename(Q);if(B=AC.lookupPath(A,{parent:!0}).node,g=AC.lookupPath(Q,{parent:!0}).node,!B||!g)throw new AC.ErrnoError(44);if(B.mount!==g.mount)throw new AC.ErrnoError(75);var 
o=AC.lookupNode(B,D),N=AQ.relative(A,E);if("."!==N.charAt(0))throw new AC.ErrnoError(28);if("."!==(N=AQ.relative(Q,C)).charAt(0))throw new AC.ErrnoError(55);try{I=AC.lookupNode(g,w)}catch(A){}if(o!==I){var i=AC.isDir(o.mode),G=AC.mayDelete(B,D,i);if(G||(G=I?AC.mayDelete(g,w,i):AC.mayCreate(g,w)))throw new AC.ErrnoError(G);if(!B.node_ops.rename)throw new AC.ErrnoError(63);if(AC.isMountpoint(o)||I&&AC.isMountpoint(I))throw new AC.ErrnoError(10);if(g!==B&&(G=AC.nodePermissions(B,"w")))throw new AC.ErrnoError(G);AC.hashRemoveNode(o);try{B.node_ops.rename(o,g,w)}catch(A){throw A}finally{AC.hashAddNode(o)}}},rmdir:A=>{var Q=AC.lookupPath(A,{parent:!0}).node,B=AA.basename(A),g=AC.lookupNode(Q,B),I=AC.mayDelete(Q,B,!0);if(I)throw new AC.ErrnoError(I);if(!Q.node_ops.rmdir)throw new AC.ErrnoError(63);if(AC.isMountpoint(g))throw new AC.ErrnoError(10);Q.node_ops.rmdir(Q,B),AC.destroyNode(g)},readdir:A=>{var Q=AC.lookupPath(A,{follow:!0}).node;if(!Q.node_ops.readdir)throw new AC.ErrnoError(54);return Q.node_ops.readdir(Q)},unlink:A=>{var Q=AC.lookupPath(A,{parent:!0}).node;if(!Q)throw new AC.ErrnoError(44);var B=AA.basename(A),g=AC.lookupNode(Q,B),I=AC.mayDelete(Q,B,!1);if(I)throw new AC.ErrnoError(I);if(!Q.node_ops.unlink)throw new AC.ErrnoError(63);if(AC.isMountpoint(g))throw new AC.ErrnoError(10);Q.node_ops.unlink(Q,B),AC.destroyNode(g)},readlink:A=>{var Q=AC.lookupPath(A).node;if(!Q)throw new AC.ErrnoError(44);if(!Q.node_ops.readlink)throw new AC.ErrnoError(28);return AQ.resolve(AC.getPath(Q.parent),Q.node_ops.readlink(Q))},stat:(A,Q)=>{var B=AC.lookupPath(A,{follow:!Q}).node;if(!B)throw new AC.ErrnoError(44);if(!B.node_ops.getattr)throw new AC.ErrnoError(63);return B.node_ops.getattr(B)},lstat:A=>AC.stat(A,!0),chmod:(A,Q,B)=>{var g;if(!(g="string"==typeof A?AC.lookupPath(A,{follow:!B}).node:A).node_ops.setattr)throw new AC.ErrnoError(63);g.node_ops.setattr(g,{mode:4095&Q|-4096&g.mode,timestamp:Date.now()})},lchmod:(A,Q)=>{AC.chmod(A,Q,!0)},fchmod:(A,Q)=>{var 
B=AC.getStream(A);if(!B)throw new AC.ErrnoError(8);AC.chmod(B.node,Q)},chown:(A,Q,B,g)=>{var I;if(!(I="string"==typeof A?AC.lookupPath(A,{follow:!g}).node:A).node_ops.setattr)throw new AC.ErrnoError(63);I.node_ops.setattr(I,{timestamp:Date.now()})},lchown:(A,Q,B)=>{AC.chown(A,Q,B,!0)},fchown:(A,Q,B)=>{var g=AC.getStream(A);if(!g)throw new AC.ErrnoError(8);AC.chown(g.node,Q,B)},truncate:(A,Q)=>{if(Q<0)throw new AC.ErrnoError(28);if("string"==typeof A){var B;B=AC.lookupPath(A,{follow:!0}).node}else B=A;if(!B.node_ops.setattr)throw new AC.ErrnoError(63);if(AC.isDir(B.mode))throw new AC.ErrnoError(31);if(!AC.isFile(B.mode))throw new AC.ErrnoError(28);var g=AC.nodePermissions(B,"w");if(g)throw new AC.ErrnoError(g);B.node_ops.setattr(B,{size:Q,timestamp:Date.now()})},ftruncate:(A,Q)=>{var B=AC.getStream(A);if(!B)throw new AC.ErrnoError(8);if((2097155&B.flags)==0)throw new AC.ErrnoError(28);AC.truncate(B.node,Q)},utime:(A,Q,B)=>{var g=AC.lookupPath(A,{follow:!0}).node;g.node_ops.setattr(g,{timestamp:Math.max(Q,B)})},open:(A,Q,B)=>{if(""===A)throw new AC.ErrnoError(44);if(Q="string"==typeof Q?AC.modeStringToFlags(Q):Q,B=void 0===B?438:B,B=64&Q?4095&B|32768:0,"object"==typeof A)g=A;else{A=AA.normalize(A);try{var g;g=AC.lookupPath(A,{follow:!(131072&Q)}).node}catch(A){}}var I=!1;if(64&Q){if(g){if(128&Q)throw new AC.ErrnoError(20)}else g=AC.mknod(A,B,0),I=!0}if(!g)throw new AC.ErrnoError(44);if(AC.isChrdev(g.mode)&&(Q&=-513),65536&Q&&!AC.isDir(g.mode))throw new AC.ErrnoError(54);if(!I){var C=AC.mayOpen(g,Q);if(C)throw new AC.ErrnoError(C)}512&Q&&!I&&AC.truncate(g,0),Q&=-131713;var E=AC.createStream({node:g,path:AC.getPath(g),flags:Q,seekable:!0,position:0,stream_ops:g.stream_ops,ungotten:[],error:!1});return E.stream_ops.open&&E.stream_ops.open(E),!s.logReadFiles||1&Q||(AC.readFiles||(AC.readFiles={}),A in AC.readFiles||(AC.readFiles[A]=1)),E},close:A=>{if(AC.isClosed(A))throw new 
AC.ErrnoError(8);A.getdents&&(A.getdents=null);try{A.stream_ops.close&&A.stream_ops.close(A)}catch(A){throw A}finally{AC.closeStream(A.fd)}A.fd=null},isClosed:A=>null===A.fd,llseek:(A,Q,B)=>{if(AC.isClosed(A))throw new AC.ErrnoError(8);if(!A.seekable||!A.stream_ops.llseek)throw new AC.ErrnoError(70);if(0!=B&&1!=B&&2!=B)throw new AC.ErrnoError(28);return A.position=A.stream_ops.llseek(A,Q,B),A.ungotten=[],A.position},read:(A,Q,B,g,I)=>{if(g<0||I<0)throw new AC.ErrnoError(28);if(AC.isClosed(A)||(2097155&A.flags)==1)throw new AC.ErrnoError(8);if(AC.isDir(A.node.mode))throw new AC.ErrnoError(31);if(!A.stream_ops.read)throw new AC.ErrnoError(28);var C=void 0!==I;if(C){if(!A.seekable)throw new AC.ErrnoError(70)}else I=A.position;var E=A.stream_ops.read(A,Q,B,g,I);return C||(A.position+=E),E},write:(A,Q,B,g,I,C)=>{if(g<0||I<0)throw new AC.ErrnoError(28);if(AC.isClosed(A)||(2097155&A.flags)==0)throw new AC.ErrnoError(8);if(AC.isDir(A.node.mode))throw new AC.ErrnoError(31);if(!A.stream_ops.write)throw new AC.ErrnoError(28);A.seekable&&1024&A.flags&&AC.llseek(A,0,2);var E=void 0!==I;if(E){if(!A.seekable)throw new AC.ErrnoError(70)}else I=A.position;var D=A.stream_ops.write(A,Q,B,g,I,C);return E||(A.position+=D),D},allocate:(A,Q,B)=>{if(AC.isClosed(A))throw new AC.ErrnoError(8);if(Q<0||B<=0)throw new AC.ErrnoError(28);if((2097155&A.flags)==0)throw new AC.ErrnoError(8);if(!AC.isFile(A.node.mode)&&!AC.isDir(A.node.mode))throw new AC.ErrnoError(43);if(!A.stream_ops.allocate)throw new AC.ErrnoError(138);A.stream_ops.allocate(A,Q,B)},mmap:(A,Q,B,g,I)=>{if((2&g)!=0&&(2&I)==0&&(2097155&A.flags)!=2||(2097155&A.flags)==1)throw new AC.ErrnoError(2);if(!A.stream_ops.mmap)throw new AC.ErrnoError(43);return A.stream_ops.mmap(A,Q,B,g,I)},msync:(A,Q,B,g,I)=>A.stream_ops.msync?A.stream_ops.msync(A,Q,B,g,I):0,munmap:A=>0,ioctl:(A,Q,B)=>{if(!A.stream_ops.ioctl)throw new AC.ErrnoError(59);return 
A.stream_ops.ioctl(A,Q,B)},readFile:(A,Q={})=>{if(Q.flags=Q.flags||0,Q.encoding=Q.encoding||"binary","utf8"!==Q.encoding&&"binary"!==Q.encoding)throw Error('Invalid encoding type "'+Q.encoding+'"');var B,g=AC.open(A,Q.flags),I=AC.stat(A).size,C=new Uint8Array(I);return AC.read(g,C,0,I,0),"utf8"===Q.encoding?B=n(C,0):"binary"===Q.encoding&&(B=C),AC.close(g),B},writeFile:(A,Q,B={})=>{B.flags=B.flags||577;var g=AC.open(A,B.flags,B.mode);if("string"==typeof Q){var I=new Uint8Array(j(Q)+1),C=m(Q,I,0,I.length);AC.write(g,I,0,C,void 0,B.canOwn)}else if(ArrayBuffer.isView(Q))AC.write(g,Q,0,Q.byteLength,void 0,B.canOwn);else throw Error("Unsupported data type");AC.close(g)},cwd:()=>AC.currentPath,chdir:A=>{var Q=AC.lookupPath(A,{follow:!0});if(null===Q.node)throw new AC.ErrnoError(44);if(!AC.isDir(Q.node.mode))throw new AC.ErrnoError(54);var B=AC.nodePermissions(Q.node,"x");if(B)throw new AC.ErrnoError(B);AC.currentPath=Q.path},createDefaultDirectories:()=>{AC.mkdir("/tmp"),AC.mkdir("/home"),AC.mkdir("/home/web_user")},createDefaultDevices:()=>{AC.mkdir("/dev"),AC.registerDevice(AC.makedev(1,3),{read:()=>0,write:(A,Q,B,g,I)=>g}),AC.mkdev("/dev/null",AC.makedev(1,3)),Ag.register(AC.makedev(5,0),Ag.default_tty_ops),Ag.register(AC.makedev(6,0),Ag.default_tty1_ops),AC.mkdev("/dev/tty",AC.makedev(5,0)),AC.mkdev("/dev/tty1",AC.makedev(6,0));var A=function(){if("object"==typeof crypto&&"function"==typeof crypto.getRandomValues){var A=new Uint8Array(1);return()=>(crypto.getRandomValues(A),A[0])}if(a)try{return()=>Ah.randomBytes(1)[0]}catch(A){}return()=>u("randomDevice")}();AC.createDevice("/dev","random",A),AC.createDevice("/dev","urandom",A),AC.mkdir("/dev/shm"),AC.mkdir("/dev/shm/tmp")},createSpecialDirectories:()=>{AC.mkdir("/proc");var A=AC.mkdir("/proc/self");AC.mkdir("/proc/self/fd"),AC.mount({mount:()=>{var Q=AC.createNode(A,"fd",16895,73);return Q.node_ops={lookup:(A,Q)=>{var B=AC.getStream(+Q);if(!B)throw new AC.ErrnoError(8);var 
g={parent:null,mount:{mountpoint:"fake"},node_ops:{readlink:()=>B.path}};return g.parent=g,g}},Q}},{},"/proc/self/fd")},createStandardStreams:()=>{s.stdin?AC.createDevice("/dev","stdin",s.stdin):AC.symlink("/dev/tty","/dev/stdin"),s.stdout?AC.createDevice("/dev","stdout",null,s.stdout):AC.symlink("/dev/tty","/dev/stdout"),s.stderr?AC.createDevice("/dev","stderr",null,s.stderr):AC.symlink("/dev/tty1","/dev/stderr"),AC.open("/dev/stdin",0),AC.open("/dev/stdout",1),AC.open("/dev/stderr",1)},ensureErrnoError:()=>{AC.ErrnoError||(AC.ErrnoError=function(A,Q){this.name="ErrnoError",this.node=Q,this.setErrno=function(A){this.errno=A},this.setErrno(A),this.message="FS error"},AC.ErrnoError.prototype=Error(),AC.ErrnoError.prototype.constructor=AC.ErrnoError,[44].forEach(A=>{AC.genericErrors[A]=new AC.ErrnoError(A),AC.genericErrors[A].stack=""}))},staticInit:()=>{AC.ensureErrnoError(),AC.nameTable=Array(4096),AC.mount(AI,{},"/"),AC.createDefaultDirectories(),AC.createDefaultDevices(),AC.createSpecialDirectories(),AC.filesystems={MEMFS:AI}},init:(A,Q,B)=>{AC.init.initialized=!0,AC.ensureErrnoError(),s.stdin=A||s.stdin,s.stdout=Q||s.stdout,s.stderr=B||s.stderr,AC.createStandardStreams()},quit:()=>{AC.init.initialized=!1;for(var A=0;A{var B=0;return A&&(B|=365),Q&&(B|=146),B},findObject:(A,Q)=>{var B=AC.analyzePath(A,Q);return B.exists?B.object:null},analyzePath:(A,Q)=>{try{var B=AC.lookupPath(A,{follow:!Q});A=B.path}catch(A){}var g={isRoot:!1,exists:!1,error:0,name:null,path:null,object:null,parentExists:!1,parentPath:null,parentObject:null};try{var B=AC.lookupPath(A,{parent:!0});g.parentExists=!0,g.parentPath=B.path,g.parentObject=B.node,g.name=AA.basename(A),B=AC.lookupPath(A,{follow:!Q}),g.exists=!0,g.path=B.path,g.object=B.node,g.name=B.node.name,g.isRoot="/"===B.path}catch(A){g.error=A.errno}return g},createPath:(A,Q,B,g)=>{A="string"==typeof A?A:AC.getPath(A);for(var I=Q.split("/").reverse();I.length;){var C=I.pop();if(C){var 
E=AA.join2(A,C);try{AC.mkdir(E)}catch(A){}A=E}}return E},createFile:(A,Q,B,g,I)=>{var C=AA.join2("string"==typeof A?A:AC.getPath(A),Q),E=AC.getMode(g,I);return AC.create(C,E)},createDataFile:(A,Q,B,g,I,C)=>{var E=Q;A&&(A="string"==typeof A?A:AC.getPath(A),E=Q?AA.join2(A,Q):A);var D=AC.getMode(g,I),w=AC.create(E,D);if(B){if("string"==typeof B){for(var o=Array(B.length),N=0,i=B.length;N{var I=AA.join2("string"==typeof A?A:AC.getPath(A),Q),C=AC.getMode(!!B,!!g);AC.createDevice.major||(AC.createDevice.major=64);var E=AC.makedev(AC.createDevice.major++,0);return AC.registerDevice(E,{open:A=>{A.seekable=!1},close:A=>{g&&g.buffer&&g.buffer.length&&g(10)},read:(A,Q,g,I,C)=>{for(var E,D=0,w=0;w{for(var E=0;E{if(A.isDevice||A.isFolder||A.link||A.contents)return!0;if("undefined"!=typeof XMLHttpRequest)throw Error("Lazy loading should have been performed (contents set) in createLazyFile, but it was not. Lazy loading only works in web workers. Use --embed-file or --preload-file in emcc on the main thread.");if(Q)try{A.contents=AB(Q(A.url),!0),A.usedBytes=A.contents.length}catch(A){throw new AC.ErrnoError(29)}else throw Error("Cannot load without read() or XMLHttpRequest.")},createLazyFile:(A,Q,B,g,I)=>{function C(){this.lengthKnown=!1,this.chunks=[]}if(C.prototype.get=function(A){if(!(A>this.length-1)&&!(A<0)){var Q=A%this.chunkSize,B=A/this.chunkSize|0;return this.getter(B)[Q]}},C.prototype.setDataGetter=function(A){this.getter=A},C.prototype.cacheLength=function(){var A,Q=new XMLHttpRequest;if(Q.open("HEAD",B,!1),Q.send(null),!(Q.status>=200&&Q.status<300||304===Q.status))throw Error("Couldn't load "+B+". Status: "+Q.status);var g=Number(Q.getResponseHeader("Content-length")),I=(A=Q.getResponseHeader("Accept-Ranges"))&&"bytes"===A,C=(A=Q.getResponseHeader("Content-Encoding"))&&"gzip"===A,E=1048576;I||(E=g);var D=(A,Q)=>{if(A>Q)throw Error("invalid range ("+A+", "+Q+") or no bytes requested!");if(Q>g-1)throw Error("only "+g+" bytes available! 
programmer error!");var I=new XMLHttpRequest;if(I.open("GET",B,!1),g!==E&&I.setRequestHeader("Range","bytes="+A+"-"+Q),I.responseType="arraybuffer",I.overrideMimeType&&I.overrideMimeType("text/plain; charset=x-user-defined"),I.send(null),!(I.status>=200&&I.status<300||304===I.status))throw Error("Couldn't load "+B+". Status: "+I.status);return void 0!==I.response?new Uint8Array(I.response||[]):AB(I.responseText||"",!0)},w=this;w.setDataGetter(A=>{var Q=A*E,B=(A+1)*E-1;if(B=Math.min(B,g-1),void 0===w.chunks[A]&&(w.chunks[A]=D(Q,B)),void 0===w.chunks[A])throw Error("doXHR failed!");return w.chunks[A]}),(C||!g)&&(E=g=1,E=g=this.getter(0).length,t("LazyFiles on gzip forces download of the whole file when length is accessed")),this._length=g,this._chunkSize=E,this.lengthKnown=!0},"undefined"!=typeof XMLHttpRequest){if(!J)throw"Cannot do synchronous binary XHRs outside webworkers in modern browsers. Use --embed-file or --preload-file in emcc";var E=new C;Object.defineProperties(E,{length:{get:function(){return this.lengthKnown||this.cacheLength(),this._length}},chunkSize:{get:function(){return this.lengthKnown||this.cacheLength(),this._chunkSize}}});var w={isDevice:!1,contents:E}}else var w={isDevice:!1,url:B};var o=AC.createFile(A,Q,w,g,I);w.contents?o.contents=w.contents:w.url&&(o.contents=null,o.url=w.url),Object.defineProperties(o,{usedBytes:{get:function(){return this.contents.length}}});var N={};function i(A,Q,B,g,I){var C=A.node.contents;if(I>=C.length)return 0;var E=Math.min(C.length-I,g);if(C.slice)for(var D=0;D{var Q=o.stream_ops[A];N[A]=function(){return AC.forceLoadFile(o),Q.apply(null,arguments)}}),N.read=(A,Q,B,g,I)=>(AC.forceLoadFile(o),i(A,Q,B,g,I)),N.mmap=(A,Q,B,g,I)=>{AC.forceLoadFile(o);var C=void u();if(!C)throw new AC.ErrnoError(48);return i(A,D,C,Q,B),{ptr:C,allocated:!0}},o.stream_ops=N,o},createPreloadedFile:(A,Q,g,I,C,E,D,w,o,N)=>{var i,G,Y=Q?AQ.resolve(AA.join2(A,Q)):A;function F(B){function 
g(B){N&&N(),w||AC.createDataFile(A,Q,B,I,C,o),E&&E(),W()}Browser.handledByPreloadPlugin(B,Y,g,()=>{D&&D(),W()})||g(B)}l(),"string"==typeof g?(i=A=>F(A),G="al "+g,B(g,A=>{var Q;Q='Loading data file "'+g+'" failed (no arrayBuffer).',A||u(Q),i(new Uint8Array(A)),G&&W()},A=>{if(D)D();else throw'Loading data file "'+g+'" failed.'}),G&&l()):F(g)},indexedDB:()=>window.indexedDB||window.mozIndexedDB||window.webkitIndexedDB||window.msIndexedDB,DB_NAME:()=>"EM_FS_"+window.location.pathname,DB_VERSION:20,DB_STORE_NAME:"FILE_DATA",saveFilesToDB:(A,Q=()=>{},B=()=>{})=>{var g=AC.indexedDB();try{var I=g.open(AC.DB_NAME(),AC.DB_VERSION)}catch(A){return B(A)}I.onupgradeneeded=()=>{t("creating db"),I.result.createObjectStore(AC.DB_STORE_NAME)},I.onsuccess=()=>{var g=I.result.transaction([AC.DB_STORE_NAME],"readwrite"),C=g.objectStore(AC.DB_STORE_NAME),E=0,D=0,w=A.length;function o(){0==D?Q():B()}A.forEach(A=>{var Q=C.put(AC.analyzePath(A).object.contents,A);Q.onsuccess=()=>{++E+D==w&&o()},Q.onerror=()=>{D++,E+D==w&&o()}}),g.onerror=B},I.onerror=B},loadFilesFromDB:(A,Q=()=>{},B=()=>{})=>{var g=AC.indexedDB();try{var I=g.open(AC.DB_NAME(),AC.DB_VERSION)}catch(A){return B(A)}I.onupgradeneeded=B,I.onsuccess=()=>{var g=I.result;try{var C=g.transaction([AC.DB_STORE_NAME],"readonly")}catch(A){B(A);return}var E=C.objectStore(AC.DB_STORE_NAME),D=0,w=0,o=A.length;function N(){0==w?Q():B()}A.forEach(A=>{var Q=E.get(A);Q.onsuccess=()=>{AC.analyzePath(A).exists&&AC.unlink(A),AC.createDataFile(AA.dirname(A),AA.basename(A),Q.result,!0,!0,!0),++D+w==o&&N()},Q.onerror=()=>{w++,D+w==o&&N()}}),C.onerror=B},I.onerror=B}},AE={DEFAULT_POLLMASK:5,calculateAt:function(A,Q,B){if(AA.isAbs(Q))return Q;if(-100===A)g=AC.cwd();else{var g;g=AE.getStreamFromFD(A).path}if(0==Q.length){if(!B)throw new AC.ErrnoError(44);return g}return AA.join2(g,Q)},doStat:function(A,Q,B){try{var g=A(Q)}catch(A){if(A&&A.node&&AA.normalize(Q)!==AA.normalize(AC.getPath(A.node)))return -54;throw 
A}o[B>>2]=g.dev,o[B+8>>2]=g.ino,o[B+12>>2]=g.mode,N[B+16>>2]=g.nlink,o[B+20>>2]=g.uid,o[B+24>>2]=g.gid,o[B+28>>2]=g.rdev,h=[g.size>>>0,+Math.abs(F=g.size)>=1?F>0?(0|Math.min(+Math.floor(F/4294967296),4294967295))>>>0:~~+Math.ceil((F-+(~~F>>>0))/4294967296)>>>0:0],o[B+40>>2]=h[0],o[B+44>>2]=h[1],o[B+48>>2]=4096,o[B+52>>2]=g.blocks;var I=g.atime.getTime(),C=g.mtime.getTime(),E=g.ctime.getTime();return h=[Math.floor(I/1e3)>>>0,+Math.abs(F=Math.floor(I/1e3))>=1?F>0?(0|Math.min(+Math.floor(F/4294967296),4294967295))>>>0:~~+Math.ceil((F-+(~~F>>>0))/4294967296)>>>0:0],o[B+56>>2]=h[0],o[B+60>>2]=h[1],N[B+64>>2]=I%1e3*1e3,h=[Math.floor(C/1e3)>>>0,+Math.abs(F=Math.floor(C/1e3))>=1?F>0?(0|Math.min(+Math.floor(F/4294967296),4294967295))>>>0:~~+Math.ceil((F-+(~~F>>>0))/4294967296)>>>0:0],o[B+72>>2]=h[0],o[B+76>>2]=h[1],N[B+80>>2]=C%1e3*1e3,h=[Math.floor(E/1e3)>>>0,+Math.abs(F=Math.floor(E/1e3))>=1?F>0?(0|Math.min(+Math.floor(F/4294967296),4294967295))>>>0:~~+Math.ceil((F-+(~~F>>>0))/4294967296)>>>0:0],o[B+88>>2]=h[0],o[B+92>>2]=h[1],N[B+96>>2]=E%1e3*1e3,h=[g.ino>>>0,+Math.abs(F=g.ino)>=1?F>0?(0|Math.min(+Math.floor(F/4294967296),4294967295))>>>0:~~+Math.ceil((F-+(~~F>>>0))/4294967296)>>>0:0],o[B+104>>2]=h[0],o[B+108>>2]=h[1],0},doMsync:function(A,Q,B,g,I){if(!AC.isFile(Q.node.mode))throw new AC.ErrnoError(43);if(2&g)return 0;var C=w.slice(A,A+B);AC.msync(Q,C,I,B,g)},varargs:void 0,get:function(){return AE.varargs+=4,o[AE.varargs-4>>2]},getStr:function(A){return A?n(w,A,void 0):""},getStreamFromFD:function(A){var Q=AC.getStream(A);if(!Q)throw new AC.ErrnoError(8);return Q}},AD={};function Aw(){if(!Aw.strings){var A={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:("object"==typeof navigator&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:K||"./this.program"};for(var Q in AD)void 0===AD[Q]?delete A[Q]:A[Q]=AD[Q];var B=[];for(var Q in A)B.push(Q+"="+A[Q]);Aw.strings=B}return Aw.strings}function 
Ao(A){E=A,f||(s.onExit&&s.onExit(A),e=!0),L(A,new _(A))}var AN=function(A,Q,B,g){A||(A=this),this.parent=A,this.mount=A.mount,this.mounted=null,this.id=AC.nextInode++,this.name=Q,this.mode=B,this.node_ops={},this.stream_ops={},this.rdev=g};Object.defineProperties(AN.prototype,{read:{get:function(){return(365&this.mode)==365},set:function(A){A?this.mode|=365:this.mode&=-366}},write:{get:function(){return(146&this.mode)==146},set:function(A){A?this.mode|=146:this.mode&=-147}},isFolder:{get:function(){return AC.isDir(this.mode)}},isDevice:{get:function(){return AC.isChrdev(this.mode)}}}),AC.FSNode=AN,AC.staticInit();var Ai={TVMWasmPackedCFunc:function(){O("missing function: TVMWasmPackedCFunc"),u(-1)},TVMWasmPackedCFuncFinalizer:function(){O("missing function: TVMWasmPackedCFuncFinalizer"),u(-1)},_ZN3tvm7runtime9threading10NumThreadsEv:function(){O("missing function: _ZN3tvm7runtime9threading10NumThreadsEv"),u(-1)},_ZN3tvm7runtime9threading15ResetThreadPoolEv:function(){O("missing function: _ZN3tvm7runtime9threading15ResetThreadPoolEv"),u(-1)},clock_time_get:function(A,Q,B){if(!(0==A||1==A||2==A||3==A))return 28;var g=Math.round(1e6*(0===A?Date.now():M()));return o[B>>2]=g>>>0,o[B+4>>2]=g/4294967296>>>0,0},emscripten_notify_memory_growth:function(A){q()},environ_get:function(A,Q){var B=0;return Aw().forEach(function(g,I){var C=Q+B;N[A+4*I>>2]=C,function(A,Q,B){for(var g=0;g>0]=A.charCodeAt(g);D[Q>>0]=0}(g,C),B+=g.length+1}),0},environ_sizes_get:function(A,Q){var B=Aw();N[A>>2]=B.length;var g=0;return B.forEach(function(A){g+=A.length+1}),N[Q>>2]=g,0},fd_close:function(A){try{var Q=AE.getStreamFromFD(A);return AC.close(Q),0}catch(A){if(void 0===AC||"ErrnoError"!==A.name)throw A;return A.errno}},fd_read:function(A,Q,B,g){try{var I=AE.getStreamFromFD(A),C=function(A,Q,B,g){for(var I=0,C=0;C>2],w=N[Q+4>>2];Q+=8;var o=AC.read(A,D,E,w,g);if(o<0)return -1;if(I+=o,o>2]=C,0}catch(A){if(void 0===AC||"ErrnoError"!==A.name)throw A;return 
A.errno}},fd_seek:function(A,Q,B,g){try{if(Q=Q<-9007199254740992||Q>9007199254740992?NaN:Number(Q),isNaN(Q))return 61;var I=AE.getStreamFromFD(A);return AC.llseek(I,Q,B),h=[I.position>>>0,(F=I.position,+Math.abs(F)>=1?F>0?(0|Math.min(+Math.floor(F/4294967296),4294967295))>>>0:~~+Math.ceil((F-+(~~F>>>0))/4294967296)>>>0:0)],o[g>>2]=h[0],o[g+4>>2]=h[1],I.getdents&&0===Q&&0===B&&(I.getdents=null),0}catch(A){if(void 0===AC||"ErrnoError"!==A.name)throw A;return A.errno}},fd_write:function(A,Q,B,g){try{var I=AE.getStreamFromFD(A),C=function(A,Q,B,g){for(var I=0,C=0;C>2],w=N[Q+4>>2];Q+=8;var o=AC.write(A,D,E,w,g);if(o<0)return -1;I+=o,void 0!==g&&(g+=o)}return I}(I,Q,B);return N[g>>2]=C,0}catch(A){if(void 0===AC||"ErrnoError"!==A.name)throw A;return A.errno}},proc_exit:Ao};(function(){var A,Q,B,g={env:Ai,wasi_snapshot_preview1:Ai};function E(A,Q){var B=A.exports;return s.asm=B,C=s.asm.memory,q(),s.asm.__indirect_function_table,W(),B}if(l(),s.instantiateWasm)try{return s.instantiateWasm(g,E)}catch(A){return O("Module.instantiateWasm callback failed with error: "+A),!1}A=I,Q=i,B=function(A){E(A.instance)},A||"function"!=typeof WebAssembly.instantiateStreaming||b(Q)||V(Q)||a||"function"!=typeof fetch?X(Q,g,B):fetch(Q,{credentials:"same-origin"}).then(function(A){return WebAssembly.instantiateStreaming(A,g).then(B,function(A){return O("wasm streaming compile failed: "+A),O("falling back to ArrayBuffer 
instantiation"),X(Q,g,B)})})})(),s.__ZN3tvm7runtime17GetCustomTypeNameEh=function(){return(s.__ZN3tvm7runtime17GetCustomTypeNameEh=s.asm._ZN3tvm7runtime17GetCustomTypeNameEh).apply(null,arguments)},s.__ZN3tvm7runtime8Registry3GetERKNS0_6StringE=function(){return(s.__ZN3tvm7runtime8Registry3GetERKNS0_6StringE=s.asm._ZN3tvm7runtime8Registry3GetERKNS0_6StringE).apply(null,arguments)},s.__ZN3tvm7runtime23GetCustomTypeRegisteredEh=function(){return(s.__ZN3tvm7runtime23GetCustomTypeRegisteredEh=s.asm._ZN3tvm7runtime23GetCustomTypeRegisteredEh).apply(null,arguments)},s.__ZN3tvm7runtime19ParseCustomDatatypeERKNSt3__212basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEEPPKc=function(){return(s.__ZN3tvm7runtime19ParseCustomDatatypeERKNSt3__212basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEEPPKc=s.asm._ZN3tvm7runtime19ParseCustomDatatypeERKNSt3__212basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEEPPKc).apply(null,arguments)},s._TVMGetLastError=function(){return(s._TVMGetLastError=s.asm.TVMGetLastError).apply(null,arguments)},s._TVMAPISetLastError=function(){return(s._TVMAPISetLastError=s.asm.TVMAPISetLastError).apply(null,arguments)},s._TVMModLoadFromFile=function(){return(s._TVMModLoadFromFile=s.asm.TVMModLoadFromFile).apply(null,arguments)},s.__ZN3tvm7runtime6Module12LoadFromFileERKNS0_6StringES4_=function(){return(s.__ZN3tvm7runtime6Module12LoadFromFileERKNS0_6StringES4_=s.asm._ZN3tvm7runtime6Module12LoadFromFileERKNS0_6StringES4_).apply(null,arguments)},s._TVMModImport=function(){return(s._TVMModImport=s.asm.TVMModImport).apply(null,arguments)},s._TVMModGetFunction=function(){return(s._TVMModGetFunction=s.asm.TVMModGetFunction).apply(null,arguments)},s._TVMModFree=function(){return(s._TVMModFree=s.asm.TVMModFree).apply(null,arguments)},s._TVMObjectFree=function(){return(s._TVMObjectFree=s.asm.TVMObjectFree).apply(null,arguments)},s._TVMBackendGetFuncFromEnv=function(){return(s._TVMBackendGetFuncFromEnv=s.asm.TVMBackendGetFuncFromEnv).apply(null,arguments)}
,s._TVMBackendAllocWorkspace=function(){return(s._TVMBackendAllocWorkspace=s.asm.TVMBackendAllocWorkspace).apply(null,arguments)},s._TVMBackendFreeWorkspace=function(){return(s._TVMBackendFreeWorkspace=s.asm.TVMBackendFreeWorkspace).apply(null,arguments)},s._TVMBackendRunOnce=function(){return(s._TVMBackendRunOnce=s.asm.TVMBackendRunOnce).apply(null,arguments)},s._TVMFuncFree=function(){return(s._TVMFuncFree=s.asm.TVMFuncFree).apply(null,arguments)},s._TVMByteArrayFree=function(){return(s._TVMByteArrayFree=s.asm.TVMByteArrayFree).apply(null,arguments)},s._TVMFuncCall=function(){return(s._TVMFuncCall=s.asm.TVMFuncCall).apply(null,arguments)},s._TVMCFuncSetReturn=function(){return(s._TVMCFuncSetReturn=s.asm.TVMCFuncSetReturn).apply(null,arguments)},s._TVMFuncCreateFromCFunc=function(){return(s._TVMFuncCreateFromCFunc=s.asm.TVMFuncCreateFromCFunc).apply(null,arguments)},s._TVMStreamCreate=function(){return(s._TVMStreamCreate=s.asm.TVMStreamCreate).apply(null,arguments)},s._TVMStreamFree=function(){return(s._TVMStreamFree=s.asm.TVMStreamFree).apply(null,arguments)},s._TVMSetStream=function(){return(s._TVMSetStream=s.asm.TVMSetStream).apply(null,arguments)},s._TVMSynchronize=function(){return(s._TVMSynchronize=s.asm.TVMSynchronize).apply(null,arguments)},s._TVMStreamStreamSynchronize=function(){return(s._TVMStreamStreamSynchronize=s.asm.TVMStreamStreamSynchronize).apply(null,arguments)},s._TVMCbArgToReturn=function(){return(s._TVMCbArgToReturn=s.asm.TVMCbArgToReturn).apply(null,arguments)},s._TVMDeviceAllocDataSpace=function(){return(s._TVMDeviceAllocDataSpace=s.asm.TVMDeviceAllocDataSpace).apply(null,arguments)},s._TVMDeviceAllocDataSpaceWithScope=function(){return(s._TVMDeviceAllocDataSpaceWithScope=s.asm.TVMDeviceAllocDataSpaceWithScope).apply(null,arguments)},s._TVMDeviceFreeDataSpace=function(){return(s._TVMDeviceFreeDataSpace=s.asm.TVMDeviceFreeDataSpace).apply(null,arguments)},s._TVMDeviceCopyDataFromTo=function(){return(s._TVMDeviceCopyDataFromTo=s.asm.TVMDeviceC
opyDataFromTo).apply(null,arguments)},s.__ZN3tvm7runtime8Registry8RegisterERKNS0_6StringEb=function(){return(s.__ZN3tvm7runtime8Registry8RegisterERKNS0_6StringEb=s.asm._ZN3tvm7runtime8Registry8RegisterERKNS0_6StringEb).apply(null,arguments)},s._TVMBackendParallelLaunch=function(){return(s._TVMBackendParallelLaunch=s.asm.TVMBackendParallelLaunch).apply(null,arguments)},s._TVMBackendParallelBarrier=function(){return(s._TVMBackendParallelBarrier=s.asm.TVMBackendParallelBarrier).apply(null,arguments)},s.__ZN3tvm7runtime8Registry9ListNamesEv=function(){return(s.__ZN3tvm7runtime8Registry9ListNamesEv=s.asm._ZN3tvm7runtime8Registry9ListNamesEv).apply(null,arguments)},s.__ZN3tvm7runtime9BacktraceEv=function(){return(s.__ZN3tvm7runtime9BacktraceEv=s.asm._ZN3tvm7runtime9BacktraceEv).apply(null,arguments)},s.__ZN3tvm7runtime14RuntimeEnabledERKNS0_6StringE=function(){return(s.__ZN3tvm7runtime14RuntimeEnabledERKNS0_6StringE=s.asm._ZN3tvm7runtime14RuntimeEnabledERKNS0_6StringE).apply(null,arguments)},s.__ZN3tvm7runtime7NDArray10CreateViewENS0_10ShapeTupleE10DLDataType=function(){return(s.__ZN3tvm7runtime7NDArray10CreateViewENS0_10ShapeTupleE10DLDataType=s.asm._ZN3tvm7runtime7NDArray10CreateViewENS0_10ShapeTupleE10DLDataType).apply(null,arguments)},s.__ZNK3tvm7runtime7NDArray8ToDLPackEv=function(){return(s.__ZNK3tvm7runtime7NDArray8ToDLPackEv=s.asm._ZNK3tvm7runtime7NDArray8ToDLPackEv).apply(null,arguments)},s.__ZN3tvm7runtime7NDArray5EmptyENS0_10ShapeTupleE10DLDataType8DLDeviceNS0_8OptionalINS0_6StringEEE=function(){return(s.__ZN3tvm7runtime7NDArray5EmptyENS0_10ShapeTupleE10DLDataType8DLDeviceNS0_8OptionalINS0_6StringEEE=s.asm._ZN3tvm7runtime7NDArray5EmptyENS0_10ShapeTupleE10DLDataType8DLDeviceNS0_8OptionalINS0_6StringEEE).apply(null,arguments)},s.__ZN3tvm7runtime7NDArray20FromExternalDLTensorERK8DLTensor=function(){return(s.__ZN3tvm7runtime7NDArray20FromExternalDLTensorERK8DLTensor=s.asm._ZN3tvm7runtime7NDArray20FromExternalDLTensorERK8DLTensor).apply(null,arguments)},s.__ZN3tvm7r
untime7NDArray9IsAlignedERK8DLTensor=function(){return(s.__ZN3tvm7runtime7NDArray9IsAlignedERK8DLTensor=s.asm._ZN3tvm7runtime7NDArray9IsAlignedERK8DLTensor).apply(null,arguments)},s.__ZN3tvm7runtime7NDArray15NewFromDLTensorEP8DLTensorRK8DLDevice=function(){return(s.__ZN3tvm7runtime7NDArray15NewFromDLTensorEP8DLTensorRK8DLDevice=s.asm._ZN3tvm7runtime7NDArray15NewFromDLTensorEP8DLTensorRK8DLDevice).apply(null,arguments)},s.__ZN3tvm7runtime7NDArray10FromDLPackEP15DLManagedTensor=function(){return(s.__ZN3tvm7runtime7NDArray10FromDLPackEP15DLManagedTensor=s.asm._ZN3tvm7runtime7NDArray10FromDLPackEP15DLManagedTensor).apply(null,arguments)},s.__ZNK3tvm7runtime7NDArray11CopyToBytesEPvm=function(){return(s.__ZNK3tvm7runtime7NDArray11CopyToBytesEPvm=s.asm._ZNK3tvm7runtime7NDArray11CopyToBytesEPvm).apply(null,arguments)},s.__ZN3tvm7runtime7NDArray13CopyFromBytesEPKvm=function(){return(s.__ZN3tvm7runtime7NDArray13CopyFromBytesEPKvm=s.asm._ZN3tvm7runtime7NDArray13CopyFromBytesEPKvm).apply(null,arguments)},s.__ZN3tvm7runtime7NDArray10CopyFromToEPK8DLTensorPS2_Pv=function(){return(s.__ZN3tvm7runtime7NDArray10CopyFromToEPK8DLTensorPS2_Pv=s.asm._ZN3tvm7runtime7NDArray10CopyFromToEPK8DLTensorPS2_Pv).apply(null,arguments)},s.__ZNK3tvm7runtime7NDArray5ShapeEv=function(){return(s.__ZNK3tvm7runtime7NDArray5ShapeEv=s.asm._ZNK3tvm7runtime7NDArray5ShapeEv).apply(null,arguments)},s.__ZNK3tvm7runtime7NDArray8DataTypeEv=function(){return(s.__ZNK3tvm7runtime7NDArray8DataTypeEv=s.asm._ZNK3tvm7runtime7NDArray8DataTypeEv).apply(null,arguments)},s.__ZN3tvm7runtime7NDArray28AbilityOfZeroCopyForDLTensorEP8DLTensorRK8DLDevice=function(){return(s.__ZN3tvm7runtime7NDArray28AbilityOfZeroCopyForDLTensorEP8DLTensorRK8DLDevice=s.asm._ZN3tvm7runtime7NDArray28AbilityOfZeroCopyForDLTensorEP8DLTensorRK8DLDevice).apply(null,arguments)},s._TVMArrayGetTypeIndex=function(){return(s._TVMArrayGetTypeIndex=s.asm.TVMArrayGetTypeIndex).apply(null,arguments)},s._TVMArrayAlloc=function(){return(s._TVMArrayAlloc=s.asm.TVMA
rrayAlloc).apply(null,arguments)},s._TVMArrayFree=function(){return(s._TVMArrayFree=s.asm.TVMArrayFree).apply(null,arguments)},s._TVMArrayCopyFromTo=function(){return(s._TVMArrayCopyFromTo=s.asm.TVMArrayCopyFromTo).apply(null,arguments)},s._TVMArrayFromDLPack=function(){return(s._TVMArrayFromDLPack=s.asm.TVMArrayFromDLPack).apply(null,arguments)},s._TVMArrayToDLPack=function(){return(s._TVMArrayToDLPack=s.asm.TVMArrayToDLPack).apply(null,arguments)},s._TVMDLManagedTensorCallDeleter=function(){return(s._TVMDLManagedTensorCallDeleter=s.asm.TVMDLManagedTensorCallDeleter).apply(null,arguments)},s._TVMArrayCopyFromBytes=function(){return(s._TVMArrayCopyFromBytes=s.asm.TVMArrayCopyFromBytes).apply(null,arguments)},s._TVMArrayCopyToBytes=function(){return(s._TVMArrayCopyToBytes=s.asm.TVMArrayCopyToBytes).apply(null,arguments)},s._TVMObjectGetTypeIndex=function(){return(s._TVMObjectGetTypeIndex=s.asm.TVMObjectGetTypeIndex).apply(null,arguments)},s._TVMObjectRetain=function(){return(s._TVMObjectRetain=s.asm.TVMObjectRetain).apply(null,arguments)},s._TVMObjectDerivedFrom=function(){return(s._TVMObjectDerivedFrom=s.asm.TVMObjectDerivedFrom).apply(null,arguments)},s._TVMObjectTypeKey2Index=function(){return(s._TVMObjectTypeKey2Index=s.asm.TVMObjectTypeKey2Index).apply(null,arguments)},s._TVMObjectTypeIndex2Key=function(){return(s._TVMObjectTypeIndex2Key=s.asm.TVMObjectTypeIndex2Key).apply(null,arguments)},s.__ZN3tvm7runtime5Timer5StartE8DLDevice=function(){return(s.__ZN3tvm7runtime5Timer5StartE8DLDevice=s.asm._ZN3tvm7runtime5Timer5StartE8DLDevice).apply(null,arguments)},s.__ZN3tvm7runtime8Registry8set_bodyENS0_10PackedFuncE=function(){return(s.__ZN3tvm7runtime8Registry8set_bodyENS0_10PackedFuncE=s.asm._ZN3tvm7runtime8Registry8set_bodyENS0_10PackedFuncE).apply(null,arguments)},s.__ZN3tvm7runtime8Registry6RemoveERKNS0_6StringE=function(){return(s.__ZN3tvm7runtime8Registry6RemoveERKNS0_6StringE=s.asm._ZN3tvm7runtime8Registry6RemoveERKNS0_6StringE).apply(null,arguments)},s.__ZN3tvm
7runtime15EnvCheckSignalsEv=function(){return(s.__ZN3tvm7runtime15EnvCheckSignalsEv=s.asm._ZN3tvm7runtime15EnvCheckSignalsEv).apply(null,arguments)},s._TVMFuncRegisterGlobal=function(){return(s._TVMFuncRegisterGlobal=s.asm.TVMFuncRegisterGlobal).apply(null,arguments)},s._TVMFuncGetGlobal=function(){return(s._TVMFuncGetGlobal=s.asm.TVMFuncGetGlobal).apply(null,arguments)},s._TVMFuncListGlobalNames=function(){return(s._TVMFuncListGlobalNames=s.asm.TVMFuncListGlobalNames).apply(null,arguments)},s._TVMFuncRemoveGlobal=function(){return(s._TVMFuncRemoveGlobal=s.asm.TVMFuncRemoveGlobal).apply(null,arguments)},s._TVMBackendRegisterEnvCAPI=function(){return(s._TVMBackendRegisterEnvCAPI=s.asm.TVMBackendRegisterEnvCAPI).apply(null,arguments)},s._TVMBackendRegisterSystemLibSymbol=function(){return(s._TVMBackendRegisterSystemLibSymbol=s.asm.TVMBackendRegisterSystemLibSymbol).apply(null,arguments)},s._TVMBackendAnyListSetPackedArg=function(){return(s._TVMBackendAnyListSetPackedArg=s.asm.TVMBackendAnyListSetPackedArg).apply(null,arguments)},s._TVMBackendAnyListResetItem=function(){return(s._TVMBackendAnyListResetItem=s.asm.TVMBackendAnyListResetItem).apply(null,arguments)},s._TVMBackendAnyListMoveFromPackedReturn=function(){return(s._TVMBackendAnyListMoveFromPackedReturn=s.asm.TVMBackendAnyListMoveFromPackedReturn).apply(null,arguments)},s.__ZN3tvm7runtime6detail12LogFatalImplERKNSt3__212basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEiSA_=function(){return(s.__ZN3tvm7runtime6detail12LogFatalImplERKNSt3__212basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEiSA_=s.asm._ZN3tvm7runtime6detail12LogFatalImplERKNSt3__212basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEiSA_).apply(null,arguments)},s.__ZN3tvm7runtime6detail14LogMessageImplERKNSt3__212basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEiiSA_=function(){return(s.__ZN3tvm7runtime6detail14LogMessageImplERKNSt3__212basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEiiSA_=s.asm._ZN3tvm7runtime6detail14
LogMessageImplERKNSt3__212basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEiiSA_).apply(null,arguments)},s._TVMWasmAllocSpace=function(){return(s._TVMWasmAllocSpace=s.asm.TVMWasmAllocSpace).apply(null,arguments)},s._TVMWasmFreeSpace=function(){return(s._TVMWasmFreeSpace=s.asm.TVMWasmFreeSpace).apply(null,arguments)},s._TVMWasmFuncCreateFromCFunc=function(){return(s._TVMWasmFuncCreateFromCFunc=s.asm.TVMWasmFuncCreateFromCFunc).apply(null,arguments)};var AG=s.__initialize=function(){return(AG=s.__initialize=s.asm._initialize).apply(null,arguments)};function AY(A=y){!(T>0)&&(function(){if(s.preRun)for("function"==typeof s.preRun&&(s.preRun=[s.preRun]);s.preRun.length;){var A;A=s.preRun.shift(),p.unshift(A)}$(p)}(),T>0||(s.setStatus?(s.setStatus("Running..."),setTimeout(function(){setTimeout(function(){s.setStatus("")},1),Q()},1)):Q()));function Q(){!U&&(U=!0,s.calledRun=!0,e||(s.noFSInit||AC.init.initialized||AC.init(),AC.ignorePermissions=!1,$(d),$(z),s.onRuntimeInitialized&&s.onRuntimeInitialized(),AM&&function(A=[]){var Q=AG;try{Q(),E=0,Ao(0)}catch(A){return function(A){if(A instanceof _||"unwind"==A)return E;L(1,A)}(A)}}(A),function(){if(s.postRun)for("function"==typeof s.postRun&&(s.postRun=[s.postRun]);s.postRun.length;){var A;A=s.postRun.shift(),x.unshift(A)}$(x)}()))}}if(P=function A(){U||AY(),U||(P=A)},s.preInit)for("function"==typeof s.preInit&&(s.preInit=[s.preInit]);s.preInit.length>0;)s.preInit.pop()();var AM=!0;s.noInitialRun&&(AM=!1),AY(),this.Module=s,this.start=s.wasmLibraryProvider.start,this.imports=s.wasmLibraryProvider.imports,this.wasiImport=this.imports.wasi_snapshot_preview1}/**
- * Get performance measurement.
- */function c(){if("undefined"!=typeof performance)return performance;{// eslint-disable-next-line @typescript-eslint/no-var-requires
-let A=i&&i.__esModule?i.default:i;return A.performance}}/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied. See the License for the
- * specific language governing permissions and limitations
- * under the License.
- *//**
- * @internal
- * FFI Library wrapper, maintains most runtime states.
- */class J{constructor(A,Q){this.recycledCallStacks=[],this.wasmInstance=A,this.memory=new M(this.detectWasmMemory(this.wasmInstance,Q)),h(void 0!==this.wasmInstance.exports,"Expect the library module contains exports"),this.exports=this.wasmInstance.exports,this.wasm32=this.memory.wasm32,this.validateInstance()}dispose(){for(var A;0!=this.recycledCallStacks.length;)this.recycledCallStacks.pop().dispose();null===(A=this.webGPUContext)||void 0===A||A.dispose()}sizeofPtr(){return this.memory.sizeofPtr()}checkCall(A){if(0!=A){let A=this.exports.TVMGetLastError();throw Error("TVMError: "+this.memory.loadCString(A))}}getOrAllocCallStack(){return 0!=this.recycledCallStacks.length?this.recycledCallStacks.pop():new U(this.memory,this.exports.TVMWasmAllocSpace,this.exports.TVMWasmFreeSpace)}recycleCallStack(A){A.reset(),this.recycledCallStacks.push(A)}validateInstance(){this.checkExports(["TVMWasmAllocSpace","TVMWasmFreeSpace","TVMFuncFree"])}checkExports(A){let Q=[];for(let B of A){let A=this.exports[B];A instanceof Function||Q.push(B)}if(0!=Q.length)throw Error("Cannot find "+Q+" in exports")}detectWasmMemory(A,Q){if(A.exports.memory instanceof WebAssembly.Memory)return A.exports.memory;if(Q.env&&Q.env.memory instanceof WebAssembly.Memory)return Q.env.memory;throw Error("Cannt detect wasm memory from imports "+Q+" or exports"+A.exports)}}/**
- * @internal
- * Manages extra runtime context for the runtime.
- */class a{constructor(A){this.autoDisposeScope=[],this.arrayGetItem=A("runtime.ArrayGetItem"),this.arrayGetSize=A("runtime.ArraySize"),this.arrayMake=A("runtime.Array"),this.stringMake=A("runtime.String"),this.getFFIString=A("runtime.GetFFIString"),this.getSysLib=A("runtime.SystemLib"),this.arrayCacheGet=A("vm.builtin.ndarray_cache.get"),this.arrayCacheRemove=A("vm.builtin.ndarray_cache.remove"),this.arrayCacheUpdate=A("vm.builtin.ndarray_cache.update"),this.arrayCacheClear=A("vm.builtin.ndarray_cache.clear"),this.arrayDecodeStorage=A("tvmjs.array.decode_storage"),this.paramModuleFromCache=A("vm.builtin.param_module_from_cache"),this.makeShapeTuple=A("runtime.ShapeTuple"),this.ndarrayCreateView=A("runtime.TVMArrayCreateView"),this.sampleTopPFromLogits=A("vm.builtin.sample_top_p_from_logits"),this.applyRepetitionPenalty=A("vm.builtin.apply_repetition_penalty"),this.applySoftmaxWithTemperature=A("vm.builtin.apply_softmax_with_temperature")}dispose(){// call array cache clear to clear all cached items
-this.arrayCacheClear.dispose(),this.arrayGetItem.dispose(),this.arrayGetSize.dispose(),this.arrayMake.dispose(),this.stringMake.dispose(),this.getFFIString.dispose(),this.arrayCacheGet.dispose(),this.arrayCacheRemove.dispose(),this.arrayCacheUpdate.dispose(),this.arrayCacheClear.dispose(),this.arrayDecodeStorage.dispose(),this.paramModuleFromCache.dispose(),this.makeShapeTuple.dispose(),this.ndarrayCreateView.dispose(),this.sampleTopPFromLogits.dispose(),this.applyRepetitionPenalty.dispose(),this.applySoftmaxWithTemperature.dispose()}beginScope(){this.autoDisposeScope.push([])}endScope(){if(0===this.autoDisposeScope.length)throw Error("tvm.endScope called when the stack is empty.");// automatically dispose all the tracked values in the current scope.
-let A=this.autoDisposeScope.pop();for(let Q=0;Q1)throw Error("Value attached to scope multiple times");return A}}/**
- * A typed scalar constant used to represent a typed number
- * argument to PackedFunc calls.
- */class H{constructor(A,Q){this.value=A,this.dtype=Q}}/**
- * Cell holds the PackedFunc object.
- */class S{constructor(A,Q){this.handle=A,this.lib=Q}dispose(){0!=this.handle&&(this.lib.checkCall(this.lib.exports.TVMFuncFree(this.handle)),this.handle=0)}getHandle(A=!0){if(A&&0===this.handle)throw Error("PackedFunc has already been disposed");return this.handle}}let t={1:"cpu",2:"cuda",4:"opencl",8:"metal",15:"webgpu"},O={cpu:1,cuda:2,cl:4,opencl:4,vulkan:7,metal:8,webgpu:15};/**
- * Represent a runtime context where an NDArray can reside.
- */class f{constructor(A,Q,B){let g=typeof A;if("string"===g){if(this.deviceType=O[A],void 0===this.deviceType)throw Error("Cannot recognize deviceType "+A)}else if("number"===g)this.deviceType=A;else throw Error("Cannot take type "+g+" as deviceType");this.deviceId=Q,this.lib=B}/**
- * Synchronize the device
- */sync(){return o(this,void 0,void 0,function*(){this.deviceType===O.webgpu&&(h(void 0!==this.lib.webGPUContext),yield this.lib.webGPUContext.sync())})}toString(){return t[this.deviceType]+"("+this.deviceId.toString()+")"}}(C=Q||(Q={}))[C.Int=0]="Int",C[C.UInt=1]="UInt",C[C.Float=2]="Float",C[C.OpaqueHandle=3]="OpaqueHandle";let e={0:"int",1:"uint",2:"float",3:"handle"};/**
- * Runtime data type of NDArray.
- */class Z{constructor(A,Q,B){this.code=A,this.bits=Q,this.lanes=B}toString(){let A=e[this.code]+this.bits.toString();return 1!=this.lanes?A+"x"+this.lanes.toString():A}numStorageBytes(){return this.bits*this.lanes+7>>3}}/**
- * n-dimensional array.
- */class n{constructor(A,Q,B,g){this.handle=A,this.isView=Q,this.lib=B,this.ctx=g,this.isView?this.dltensor=A:this.dltensor=this.getDLTensorFromArrayHandle(this.handle);let I=0+this.lib.sizeofPtr(),C=I+4/* SizeOf.I32 */,E=I+8/* SizeOf.DLDevice */,D=E+4/* SizeOf.I32 */,w=D+1/* SizeOf.U8 */,o=w+1/* SizeOf.U8 */,N=D+4/* SizeOf.DLDataType */,i=N+this.lib.sizeofPtr(),G=i+this.lib.sizeofPtr();// dataPtr
-this.dataPtr=B.memory.loadPointer(this.dltensor),// ndim
-this.ndim=B.memory.loadI32(this.dltensor+E);// shape
-let Y=B.memory.loadPointer(this.dltensor+N);this.shape=[];for(let A=0;Anew H(A,"int"));return this.ctx.ndarrayCreateView(this,this.ctx.makeShapeTuple(...Q))}/**
- * Get handle of ndarray, check it is not null.
- *
- * @param requireNotNull require handle is not null.
- * @returns The handle.
- */getHandle(A=!0){if(A&&0===this.handle)throw Error("NDArray has already been disposed");return this.handle}/**
- * Get dataPtr of NDArray
- *
- * @returns The data pointer.
- */getDataPtr(){if(0===this.handle)throw Error("NDArray has already been disposed");return this.dataPtr}dispose(){0==this.handle||this.isView||(this.lib.checkCall(this.lib.exports.TVMArrayFree(this.handle)),this.handle=0)}/**
- * Copy data from another NDArray or javascript array.
- * The number of elements must match.
- *
- * @param data The source data array.
- * @returns this
- */copyFrom(A){if(A instanceof n)return this.lib.checkCall(this.lib.exports.TVMArrayCopyFromTo(A.getHandle(),this.getHandle(),0)),this;{let Q;let B=this.shape.reduce((A,Q)=>A*Q,1);if(A.length!=B)throw Error("data size and shape mismatch data.length"+A.length+" vs "+B);if("float32"===this.dtype)Q=Float32Array.from(A).buffer;else if("float64"===this.dtype)Q=Float64Array.from(A).buffer;else if("int32"===this.dtype)Q=Int32Array.from(A).buffer;else if("int8"===this.dtype)Q=Int8Array.from(A).buffer;else if("uint8"===this.dtype)Q=Uint8Array.from(A).buffer;else throw Error("Unsupported data type "+this.dtype);return this.copyFromRawBytes(new Uint8Array(Q))}}/**
- * Copy data from raw bytes.
- * @param data Uint8Array of bytes.
- * @returns this
- */copyFromRawBytes(A){var Q;// short cut for gpu copy
-if(this.device.deviceType===O.webgpu)return null===(Q=this.lib.webGPUContext)||void 0===Q||Q.copyRawBytesToBuffer(A,this.getDataPtr(),0,A.length),this;// CPU copy
-let B=this.shape.reduce((A,Q)=>A*Q,1),g=this.dlDataType.numStorageBytes()*B;if(g!=A.length)throw Error("Expect the data's length equals nbytes="+g);let I=this.lib.getOrAllocCallStack(),C=I.allocRawBytes(g),E=I.ptrFromOffset(C);return this.lib.memory.storeRawBytes(E,A),this.lib.checkCall(this.lib.exports.TVMArrayCopyFromBytes(this.getHandle(),E,g)),this.lib.recycleCallStack(I),this}/**
- * Return a copied Uint8Array of the raw bytes in the NDArray.
- * @returns The result array.
- */toRawBytes(){if(this.device.deviceType!=O.cpu)throw Error("Can only sync copy CPU array, use cpu_arr.copyfrom(gpu_arr) then sync instead.");let A=this.shape.reduce((A,Q)=>A*Q,1),Q=this.dlDataType.numStorageBytes()*A,B=this.lib.getOrAllocCallStack(),g=B.allocRawBytes(Q),I=B.ptrFromOffset(g);this.lib.checkCall(this.lib.exports.TVMArrayCopyToBytes(this.getHandle(),I,Q));let C=this.lib.memory.loadRawBytes(I,Q);return this.lib.recycleCallStack(B),C}/**
- * Return a TypedArray copy of the NDArray, the specific type depends on
- * the dtype of the NDArray.
- * @returns The result array.
- */toArray(){let A=this.dtype;if("float32"===A)return new Float32Array(this.toRawBytes().buffer);if("float64"===A)return new Float64Array(this.toRawBytes().buffer);if("int32"===A)return new Int32Array(this.toRawBytes().buffer);if("int8"===A)return new Int8Array(this.toRawBytes().buffer);if("uint8"===A)return new Uint8Array(this.toRawBytes().buffer);throw Error("Unsupported data type "+this.dtype)}getDLTensorFromArrayHandle(A){// Note: this depends on the NDArray C ABI.
-// keep this function in case of ABI change.
-return A}}/**
- * Runtime Module.
- */class m{constructor(A,Q,B){this.handle=A,this.lib=Q,this.makePackedFunc=B}dispose(){0!=this.handle&&(this.lib.checkCall(this.lib.exports.TVMModFree(this.handle)),this.handle=0)}/**
- * Get handle of module, check it is not null.
- *
- * @param requireNotNull require handle is not null.
- * @returns The handle.
- */getHandle(A=!0){if(A&&0===this.handle)throw Error("Module has already been disposed");return this.handle}/**
- * Get a function in the module.
- * @param name The name of the function.
- * @param queryImports Whether to also query imports
- * @returns The result function.
- */getFunction(A,Q=!0){if(0===this.handle)throw Error("Module has already been disposed");let B=this.lib.getOrAllocCallStack(),g=B.allocRawBytes(A.length+1);B.storeRawBytes(g,N(A));let I=B.allocPtrArray(1),C=B.ptrFromOffset(I);B.commitToWasmMemory(I),this.lib.checkCall(this.lib.exports.TVMModGetFunction(this.getHandle(),B.ptrFromOffset(g),Q?1:0,C));let E=this.lib.memory.loadPointer(C);if(this.lib.recycleCallStack(B),0===E)throw Error("Cannot find function "+A);let D=this.makePackedFunc(E);return D}/**
- * Import another module into the current runtime module.
- * @param mod The module to be imported.
- */importModule(A){this.lib.checkCall(this.lib.exports.TVMModImport(this.getHandle(),A.getHandle()))}}/**
- * Generic object base
- */class j{constructor(A,Q,B){this.handle=A,this.lib=Q,this.ctx=B}dispose(){0!=this.handle&&(this.lib.checkCall(this.lib.exports.TVMObjectFree(this.handle)),this.handle=0)}/**
- * Get handle of object, check it is not null.
- *
- * @param requireNotNull require handle is not null.
- * @returns The handle.
- */getHandle(A=!0){if(A&&0===this.handle)throw Error("Module has already been disposed");return this.handle}/** get the type index of the object */typeIndex(){if(0===this.handle)throw Error("The current Object has already been disposed");let A=this.lib.getOrAllocCallStack(),Q=A.allocPtrArray(1),B=A.ptrFromOffset(Q);this.lib.checkCall(this.lib.exports.TVMObjectGetTypeIndex(this.getHandle(),B));let g=this.lib.memory.loadU32(B);return this.lib.recycleCallStack(A),g}/** get the type key of the object */typeKey(){let A=this.typeIndex(),Q=this.lib.getOrAllocCallStack(),B=Q.allocPtrArray(1),g=Q.ptrFromOffset(B);this.lib.checkCall(this.lib.exports.TVMObjectTypeIndex2Key(A,g));let I=this.lib.memory.loadCString(this.lib.memory.loadPointer(g));return this.lib.recycleCallStack(Q),I}}/** Runtime array object. */class q extends j{constructor(A,Q,B){super(A,Q,B)}/**
- * @returns the size of the array.
- */size(){return this.ctx.arrayGetSize(this)}/**
- * Get index-th element of the array
- * @param index the array index.
- * @returns The element.
- */get(A){return this.ctx.arrayGetItem(this,new H(A,"int32"))}}/** Runtime string object. */class p extends j{constructor(A,Q,B){super(A,Q,B)}/**
- * @returns the content of the string.
- */toString(){return this.ctx.getFFIString(this)}}(E=B||(B={}))[E.NAIVE_ALLOCATOR=1]="NAIVE_ALLOCATOR",E[E.POOLED_ALLOCATOR=2]="POOLED_ALLOCATOR";/**
- * VirtualMachine Executor.
- *
- * This is a thin wrapper of the underlying TVM module.
- * You can also directly call the set_input, run, and get_output
- * functions of the underlying module.
- */class d{/**
- * Constructor
- * @param mod The underlying module, need to be detached.
- * @param device The main device to run the VM on.
- */constructor(A,Q){this.mod=A,this.mod.getFunction("vm_initialization")(new H(Q.deviceType,"int"),new H(Q.deviceId,"int"),new H(B.POOLED_ALLOCATOR,"int"),new H(O.cpu,"int"),new H(0,"int"),new H(B.POOLED_ALLOCATOR,"int"))}dispose(){this.mod.dispose()}/**
- * Get a function in the VM module.
- * @param name The name of the function.
- * @returns The result function.
- */getFunction(A){return this.mod.getFunction(A)}/**
- * Get the internal module.
- */getInternalModule(){return this.mod}}(D=g||(g={}))[D.kReturn=4]="kReturn",D[D.kException=5]="kException";/**
- * Cache to store model related data.
- */class z{constructor(A){this.scope=A}fetchWithCache(A){return o(this,void 0,void 0,function*(){let Q=new Request(A);void 0===this.cache&&(this.cache=yield caches.open(this.scope));let B=yield this.cache.match(Q);if(void 0===B&&(yield this.cache.add(Q),B=yield this.cache.match(Q)),void 0===B)throw Error("Cannot fetch "+A);return B})}hasAllKeys(A){return o(this,void 0,void 0,function*(){return void 0===this.cache&&(this.cache=yield caches.open(this.scope)),this.cache.keys().then(A=>A.map(A=>A.url)).then(Q=>A.every(A=>-1!==Q.indexOf(A))).catch(A=>!1)})}}/**
- * TVM runtime instance.
- *
- * All objects (NDArray, Module, PackedFunc) returned by TVM runtime function
- * calls and PackedFunc instances are tracked through a scope mechanism that will
- * get auto-released when we call endScope.
- *
- * This is necessary to be able to release the underlying WASM and WebGPU memory,
- * which is not tracked through the JS native garbage collection mechanism.
- *
- * This does mean that we have to get familiar with the following functions:
- * - {@link beginScope}
- * - {@link endScope}
- * - {@link withNewScope}
- * - {@link attachToCurrentScope}
- * - {@link detachFromCurrentScope}
- */class x{/**
- * Constructor
- *
- * importObject can also be a {@link LibraryProvider} object,
- * a WASI object, or an object containing wasmLibraryProvider field.
- *
- * @param wasmModule The input module or instance.
- * @param importObject The imports to initialize the wasmInstance if it is not provided.
- * @param wasmInstance Additional wasm instance argument for deferred construction.
- * @param env Directly specified environment module.
- *
- * @see Please use the async version {@link instantiate} when targeting browsers.
- */constructor(A,Q={},B,g){this.cacheMetadata={},this.initProgressCallback=[],B instanceof WebAssembly.Instance?h(g instanceof s,"env must be provided when passing in instance"):(h(void 0===g),g=new s(Q),B=new WebAssembly.Instance(A,g.imports)),g.start(B),this.env=g,this.lib=new J(B,g.imports),this.memory=this.lib.memory,this.exports=this.lib.exports,this.objFactory=new Map,this.ctx=new a(A=>this.getGlobalFuncInternal(A,!1)),this.registerEnvGlobalPackedFuncs(),this.registerObjectFactoryFuncs()}/**
- * Benchmark stable execution of the run function.
- *
- * @param run The run function.
- * @param dev The device to sync during each run.
- * @param number The number of times to compute the average.
- * @param repeat The number of times to repeat the run.
- */benchmark(A,Q,B=10,g=1){return o(this,void 0,void 0,function*(){// Skip first run as it can involve GPU warmup and module loading time.
-let I=c(),C=[];// run with new scope
-this.withNewScope(A),yield Q.sync();for(let E=0;E{// packed func can be released once it is registered
-let g=this.toPackedFuncInternal(Q,!0),I=this.lib.getOrAllocCallStack(),C=I.allocRawBytes(A.length+1);I.storeRawBytes(C,N(A)),I.commitToWasmMemory(),this.lib.checkCall(this.lib.exports.TVMFuncRegisterGlobal(I.ptrFromOffset(C),g._tvmPackedCell.getHandle(),B?1:0)),this.lib.recycleCallStack(I)})}/**
- * Get global PackedFunc from the runtime.
- * @param name The name of the function.
- * @param autoAttachToScope Whether to track it via autoDispose
- * @returns The result function.
- */getGlobalFunc(A){return this.getGlobalFuncInternal(A,!0)}getGlobalFuncInternal(A,Q=!0){let B=this.lib.getOrAllocCallStack(),g=B.allocRawBytes(A.length+1);B.storeRawBytes(g,N(A));let I=B.allocPtrArray(1),C=B.ptrFromOffset(I);B.commitToWasmMemory(I),this.lib.checkCall(this.exports.TVMFuncGetGlobal(B.ptrFromOffset(g),C));let E=this.memory.loadPointer(C);if(this.lib.recycleCallStack(B),0===E)throw Error("Cannot find global function "+A);let D=this.makePackedFunc(E);return Q&&this.ctx.attachToCurrentScope(D),D}/**
- * Check if func is PackedFunc.
- *
- * @param func The input.
- * @returns The check result.
- */isPackedFunc(A){// eslint-disable-next-line no-prototype-builtins
-return"function"==typeof A&&A.hasOwnProperty("_tvmPackedCell")}/**
- * Convert func to PackedFunc
- *
- * @param func Input function.
- * @returns The converted function.
- */toPackedFunc(A){return this.toPackedFuncInternal(A,!0)}toPackedFuncInternal(A,Q){if(this.isPackedFunc(A))return A;let B=this.createPackedFuncFromCFunc(this.wrapJSFuncAsPackedCFunc(A));return Q?this.ctx.attachToCurrentScope(B):B}/**
- * Setup a virtual machine module with given device.
- *
- * @param dev DLDevice the device.
- * @returns The created virtual machine.
- */createVirtualMachine(A){let Q=this.ctx.detachFromCurrentScope(this.systemLib().getFunction("vm_load_executable")());return this.ctx.attachToCurrentScope(new d(Q,A))}//-----------------------------------------------
-// Native NDArray Cache Support
-//-----------------------------------------------
-/**
- * Register a callback for fetch progress.
- *
- * @param cb the fetch progress callback.
- */registerInitProgressCallback(A){this.initProgressCallback.push(A)}/**
- * Get parameters in the form of prefix_i
- *
- * @param prefix The parameter prefix.
- * @param numParams Number of parameters.
- * @returns The parameters loaded from the cache.
- */getParamsFromCache(A,Q){return this.ctx.paramModuleFromCache(A,new H(Q,"int32")).getFunction("get_params")()}/**
- * Get NDArray from cache.
- * @param name The name of array.
- * @returns The result.
- */ndarrayCacheGet(A){return this.ctx.arrayCacheGet(A)}/**
- * Remove an NDArray from the cache.
- * @param name The name of the array.
- * @returns The removed array.
- */ndarrayCacheRemove(A){return this.ctx.arrayCacheRemove(A)}/**
- * Update the ndarray cache.
- * @param name The name of the array.
- * @param arr The content.
- */ndarrayCacheUpdate(A,Q,B=!1){this.ctx.arrayCacheUpdate(A,Q,this.scalar(B?1:0,"int32"))}/**
- * Clear the ndarray cache.
- */ndarrayCacheClear(){this.ctx.arrayCacheClear()}/**
- * Fetch NDArray cache from url.
- *
- * @param ndarrayCacheUrl The cache url.
- * @param device The device to be fetched to.
- * @param cacheScope The scope identifier of the cache
- * @returns The meta data
- */fetchNDArrayCache(A,Q,B="tvmjs"){return o(this,void 0,void 0,function*(){let g;let I=new z(B),C=new URL("ndarray-cache.json",A).href,E=yield I.fetchWithCache(C);E instanceof Response&&(g=yield E.json()),yield this.fetchNDArrayCacheInternal(A,g.records,Q,I),this.cacheMetadata=Object.assign(Object.assign({},this.cacheMetadata),g.metadata)})}/**
- * Fetch list of NDArray into the NDArrayCache.
- *
- * @param ndarrayCacheUrl The cache url.
- * @param list The list of array data.
- * @param device The device to store the data to.
- * @param artifactCache The artifact cache
- */fetchNDArrayCacheInternal(A,Q,B,g){return o(this,void 0,void 0,function*(){let I=c(),C=I.now(),E=0;for(let A=0;Anew URL(Q.dataPath,A).href)),N=A=>{// report
-for(let B=0;B=1){let A=parseInt(E[0]);A+""===E[0]&&(I=A)}return E.length>=2&&(C=parseInt(E[1])),new Z(g,I,C)}throw Error("Unknown dtype "+A)}/**
- * Create a new {@link Scalar} that can be passed to a PackedFunc.
- * @param value The number value.
- * @param dtype The dtype string.
- * @returns The created scalar.
- */scalar(A,Q){return new H(A,Q)}/**
- * Create a new {@link DLDevice}
- * @param deviceType The device type.
- * @param deviceId The device index.
- * @returns The created device.
- */device(A,Q=0){return new f(A,Q,this.lib)}/**
- * Create a new cpu {@link DLDevice}
- * @param deviceId The device index.
- */cpu(A=0){return this.device("cpu",A)}/**
- * Create a new webgpu {@link DLDevice}
- * @param deviceId The device index.
- */webgpu(A=0){return this.device("webgpu",A)}/**
- * Create an empty {@link NDArray} with given shape and dtype.
- *
- * @param shape The shape of the array.
- * @param dtype The data type of the array.
- * @param dev The device of the ndarray.
- * @returns The created ndarray.
- */empty(A,Q="float32",B=this.device("cpu",0)){Q=this.toDLDataType(Q),A="number"==typeof A?[A]:A;let g=this.lib.getOrAllocCallStack(),I=g.allocRawBytes(8/* SizeOf.I64 */*A.length);for(let Q=0;QA*Q,1),E=B-Q,D=new Float32Array(C);for(let A=0;Anew H(A,"int"));return this.ctx.makeShapeTuple(...Q)}/**
- * Get type index from type key.
- * @param typeKey The type key.
- * @returns The corresponding type index.
- */typeKey2Index(A){let Q=this.lib.getOrAllocCallStack(),B=Q.allocRawBytes(A.length+1);Q.storeRawBytes(B,N(A));let g=Q.allocPtrArray(1),I=Q.ptrFromOffset(g);Q.commitToWasmMemory(g),this.lib.checkCall(this.lib.exports.TVMObjectTypeKey2Index(Q.ptrFromOffset(B),I));let C=this.memory.loadU32(I);return this.lib.recycleCallStack(Q),C}/**
- * Register an object constructor.
- * @param typeKey The name of the function.
- * @param func function to be registered.
- * @param override Whether overwrite function in existing registry.
- */registerObjectConstructor(A,Q,B=!1){let g=this.typeKey2Index(A);if(this.objFactory.has(g)&&!B)throw Error("Type "+A+" already registered");this.objFactory.set(g,Q)}/**
- * Register an async function as a global function in the server.
- * @param name The name of the function.
- * @param func function to be registered.
- * @param override Whether overwrite function in existing registry.
- *
- * @note The async function will only be used for serving remote calls in the rpc.
- */registerAsyncServerFunc(A,Q,B=!1){this.registerFunc("__async."+A,(...A)=>{let B=A.slice(0,A.length-1),I=this.detachFromCurrentScope(A[A.length-1]),C=Q(...B);C.then(A=>{I(this.scalar(g.kReturn,"int32"),A),I.dispose()})},B)}/**
- * Asynchronously load webgpu pipelines when possible.
- * @param mod The input module.
- */asyncLoadWebGPUPipelines(A){return o(this,void 0,void 0,function*(){if(void 0===this.lib.webGPUContext)throw Error("WebGPU not initialied");let Q=this.lib.webGPUContext;this.beginScope();let B=A.getFunction("webgpu.get_fmap",!0)(),g=JSON.parse(B),I=this.detachFromCurrentScope(A.getFunction("webgpu.get_shader")),C=this.detachFromCurrentScope(A.getFunction("webgpu.update_prebuild"));this.endScope();let E=c(),D=E.now(),w=D,o=0,N=Object.entries(g),i=Promise.resolve();for(let[A,B]of N){let g=I(A);h(A===B.name);let G=Q.createShaderAsync(B,g).then(Q=>{this.beginScope(),C(A,Q),this.endScope()}).then(()=>{o+=1;let A=E.now();// skip report if gap is smaller than 1000
-if(A-w<1e3&&o!=N.length)return;w=A;let Q=Math.ceil((E.now()-D)/1e3);// report
-for(let A=0;A{})}yield i,h(o===N.length)})}/**
- * Initialize webgpu in the runtime.
- * @param device The given GPU device.
- */initWebGPU(A){let Q=new K(this.memory,A);this.registerFunc("wasm.WebGPUDeviceAPI",A=>Q.getDeviceAPI(A)),this.registerFunc("wasm.WebGPUCreateShader",(A,B)=>{let g=JSON.parse(A);return Q.createShader(g,B)}),this.registerAsyncServerFunc("wasm.WebGPUWaitForTasks",()=>o(this,void 0,void 0,function*(){yield Q.sync()})),this.lib.webGPUContext=Q}/** Register all object factory */registerObjectFactoryFuncs(){this.registerObjectConstructor("Array",(A,Q,B)=>new q(A,Q,B)),this.registerObjectConstructor("runtime.String",(A,Q,B)=>new p(A,Q,B))}/** Register global packed functions needed by the backend to the env. */registerEnvGlobalPackedFuncs(){// Register the timer function to enable the time_evaluator.
-let A=c();this.registerAsyncServerFunc("wasm.TimeExecution",(Q,B,g,I,C,E,D,w)=>o(this,void 0,void 0,function*(){// detach and explicitly dispose when the task is fulfilled
- // the promise will immediately return and we need to make sure
- // finvoke does not get recycled.
- this.ctx.detachFromCurrentScope(Q),Q(this.scalar(1,"int32")),yield B.sync();let o=[],N=g;for(let g=0;g0&&(N=Math.floor(Math.max(C/(I/N)+1,1.618*N)));let g=A.now();Q(this.scalar(N,"int32")),yield B.sync();let E=A.now();0==(I=E-g)&&i++}while(I0&&g%w==0&&(yield new Promise(A=>setTimeout(A,D)))}let i=new Float64Array(o.length);return i.set(o),// dispose finvoke
- Q.dispose(),new Uint8Array(i.buffer)})),this.registerAsyncServerFunc("testing.asyncAddOne",A=>o(this,void 0,void 0,function*(){return yield new Promise(A=>setTimeout(A,100)),A+1}))}createPackedFuncFromCFunc(A){let Q=this.env.packedCFuncTable.length;0!=this.env.packedCFuncTableFreeId.length?Q=this.env.packedCFuncTableFreeId.pop():this.env.packedCFuncTable.push(void 0),this.env.packedCFuncTable[Q]=A;let B=this.lib.getOrAllocCallStack(),g=B.allocPtrArray(1),I=B.ptrFromOffset(g);this.lib.checkCall(this.exports.TVMWasmFuncCreateFromCFunc(Q,I));let C=this.makePackedFunc(this.memory.loadPointer(I));return this.lib.recycleCallStack(B),C}/**
- * Set packed function arguments into the location indicated by argsValue and argsCode.
- * Allocate new temporary space from the stack if necessary.
- *
- * @param stack The call stack.
- * @param args The input arguments.
- * @param argsValue The offset of argsValue.
- * @param argsCode The offset of argsCode.
- */setPackedArguments(A,Q,B,g){for(let I=0;I{let D=[];// use scope to track js values.
-this.ctx.beginScope();for(let A=0;A{let B=this.lib.getOrAllocCallStack(),g=B.allocRawBytes(8/* SizeOf.TVMValue */*A.length),I=B.allocRawBytes(4/* SizeOf.I32 */*A.length);this.setPackedArguments(B,A,g,I);let C=B.allocRawBytes(8/* SizeOf.TVMValue */),E=B.allocRawBytes(4/* SizeOf.I32 */),D=B.ptrFromOffset(C),w=B.ptrFromOffset(E);// commit to wasm memory, till rvalueOffset (the return value don't need to be committed)
-B.commitToWasmMemory(C),this.lib.checkCall(this.exports.TVMFuncCall(Q.getHandle(),B.ptrFromOffset(g),B.ptrFromOffset(I),A.length,D,w));let o=this.retValueToJS(D,this.memory.loadI32(w),!1);return this.lib.recycleCallStack(B),o};return B.dispose=()=>{Q.dispose()},B._tvmPackedCell=Q,B}/**
- * Create the return value of the packed func. The value is auto-tracked for dispose.
- * @param rvaluePtr The location of rvalue
- * @param tcode The type code.
- * @param callbackArg Whether it is being used in callbackArg.
- * @returns The JS value.
- */retValueToJS(A,Q,B){switch(Q){case 0/* ArgTypeCode.Int */:case 1/* ArgTypeCode.UInt */:return this.memory.loadI64(A);case 2/* ArgTypeCode.Float */:return this.memory.loadF64(A);case 3/* ArgTypeCode.TVMOpaqueHandle */:return this.memory.loadPointer(A);case 13/* ArgTypeCode.TVMNDArrayHandle */:return this.ctx.attachToCurrentScope(new n(this.memory.loadPointer(A),!1,this.lib,this.ctx));case 7/* ArgTypeCode.TVMDLTensorHandle */:// no need to attach as we are only looking at view
-return h(B),new n(this.memory.loadPointer(A),!0,this.lib,this.ctx);case 10/* ArgTypeCode.TVMPackedFuncHandle */:return this.ctx.attachToCurrentScope(this.makePackedFunc(this.memory.loadPointer(A)));case 9/* ArgTypeCode.TVMModuleHandle */:return this.ctx.attachToCurrentScope(new m(this.memory.loadPointer(A),this.lib,A=>this.ctx.attachToCurrentScope(this.makePackedFunc(A))));case 8/* ArgTypeCode.TVMObjectHandle */:{let Q=new j(this.memory.loadPointer(A),this.lib,this.ctx),B=this.objFactory.get(Q.typeIndex());if(void 0!=B)return this.ctx.attachToCurrentScope(B(Q.getHandle(),this.lib,this.ctx));return this.ctx.attachToCurrentScope(Q)}case 4/* ArgTypeCode.Null */:return;case 6/* ArgTypeCode.DLDevice */:{let Q=this.memory.loadI32(A),B=this.memory.loadI32(A+4/* SizeOf.I32 */);return this.device(Q,B)}case 11/* ArgTypeCode.TVMStr */:{let Q=this.memory.loadCString(this.memory.loadPointer(A));return Q}case 12/* ArgTypeCode.TVMBytes */:return this.memory.loadTVMBytes(this.memory.loadPointer(A));default:throw Error("Unsupported return type code="+Q)}}}/**
- * Asynchronously instantiate a new {@link Instance}.
- *
- * importObject can also be a {@link LibraryProvider} object,
- * a WASI object, or an object containing wasmLibraryProvider field.
- * We can take advantage of the syslib implementations from Emscripten
- * by passing its generated js Module as the imports.
- *
- * @param bufferSource The source to be compiled.
- * @param importObject The import objects.
- * @param logger The system logger.
- */function T(A,Q={},B=console.log){let g=new s(Q,B);return WebAssembly.instantiate(A,g.imports).then(A=>new x(A.module,{},A.instance,g))}(w=I||(I={}))[w.InitHeader=0]="InitHeader",w[w.InitHeaderKey=1]="InitHeaderKey",w[w.InitServer=2]="InitServer",w[w.WaitForCallback=3]="WaitForCallback",w[w.ReceivePacketHeader=4]="ReceivePacketHeader",w[w.ReceivePacketBody=5]="ReceivePacketBody";/**
- * A utility class to read from binary bytes.
- */class P{constructor(A){this.offset=0,this.bytes=A}readU32(){let A=this.offset,Q=this.bytes,B=Q[A]|Q[A+1]<<8|Q[A+2]<<16|Q[A+3]<<24;return this.offset+=4,B}readU64(){let A=this.readU32();return this.offset+=4,A}readByteArray(){let A=this.readU64();h(this.offset+A<=this.bytes.byteLength);let Q=new Uint8Array(A);return Q.set(this.bytes.slice(this.offset,this.offset+A)),this.offset+=A,Q}}/**
- * A WebSocket-based RPC server.
- */class l{constructor(A,Q,B,g=console.log,C="",E="cpu",D,w){this.state=I.InitHeader,this.pendingSend=Promise.resolve(),this.inst=void 0,this.globalObjects=[],this.currPacketLength=0,this.remoteKeyLength=0,this.pendingBytes=0,this.buffredBytes=0,this.messageQueue=[],this.url=A,this.key=Q,this.name="WebSocketRPCServer["+this.key+"]: ",this.getImports=B,this.logger=g,this.ndarrayCacheUrl=C,this.ndarrayCacheDevice=E,this.initProgressCallback=D,this.asyncOnServerLoad=w,this.checkLittleEndian(),this.socket="undefined"==typeof WebSocket?new AM:new WebSocket(A),this.socket.binaryType="arraybuffer",this.socket.addEventListener("open",A=>this.onOpen(A)),this.socket.addEventListener("message",A=>this.onMessage(A)),this.socket.addEventListener("close",A=>this.onClose(A))}// eslint-disable-next-line @typescript-eslint/no-unused-vars
-onClose(A){void 0!==this.inst&&(this.globalObjects.forEach(A=>{A.dispose()}),this.log(this.inst.runtimeStatsText()),this.inst.dispose()),this.state===I.ReceivePacketHeader?(this.log("Closing the server in clean state"),this.log("Automatic reconnecting.."),new l(this.url,this.key,this.getImports,this.logger,this.ndarrayCacheUrl,this.ndarrayCacheDevice,this.initProgressCallback,this.asyncOnServerLoad)):this.log("Closing the server, final state="+this.state)}// eslint-disable-next-line @typescript-eslint/no-unused-vars
-onOpen(A){// Send the headers
-let Q=N("server:"+this.key);Q=Q.slice(0,Q.length-1);let B=new Int32Array(1);B[0]=1045105,this.socket.send(B),B[0]=Q.length,this.socket.send(B),this.socket.send(Q),this.log("connected..."),// request bytes: magic + keylen
-this.requestBytes(8/* SizeOf.I32 */),this.state=I.InitHeader}/** Handler for raw message. */onMessage(A){let Q=A.data;this.buffredBytes+=Q.byteLength,this.messageQueue.push(new Uint8Array(Q)),this.processEvents()}/** Process ready events. */processEvents(){for(;this.buffredBytes>=this.pendingBytes&&0!=this.pendingBytes;)this.onDataReady()}/** State machine to handle each request */onDataReady(){switch(this.state){case I.InitHeader:this.handleInitHeader();break;case I.InitHeaderKey:this.handleInitHeaderKey();break;case I.ReceivePacketHeader:{this.currPacketHeader=this.readFromBuffer(8/* SizeOf.I64 */);let A=new P(this.currPacketHeader);this.currPacketLength=A.readU64(),h(0===this.pendingBytes),this.requestBytes(this.currPacketLength),this.state=I.ReceivePacketBody;break}case I.ReceivePacketBody:{let A=this.readFromBuffer(this.currPacketLength);h(0===this.pendingBytes),h(void 0!==this.currPacketHeader),this.onPacketReady(this.currPacketHeader,A);break}case I.WaitForCallback:h(0===this.pendingBytes);break;default:throw Error("Cannot handle state "+this.state)}}onPacketReady(A,Q){if(void 0===this.inst){// initialize server.
-let B=new P(Q);// eslint-disable-next-line @typescript-eslint/no-unused-vars
-B.readU32(),// eslint-disable-next-line @typescript-eslint/no-unused-vars
-F(B.readByteArray());let g=B.readU32(),I=[],C=[];for(let A=0;A