diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargar crack batman arkham asylum pc windows 7 el mejor sitio para obtener el juego y el crack.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargar crack batman arkham asylum pc windows 7 el mejor sitio para obtener el juego y el crack.md deleted file mode 100644 index f894fb9354b2495b4f58972ebf6a601dcb2f812b..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargar crack batman arkham asylum pc windows 7 el mejor sitio para obtener el juego y el crack.md +++ /dev/null @@ -1,123 +0,0 @@ - -

How to Download Crack Batman Arkham Asylum PC Windows 7

-

Are you a fan of Batman and want to play one of the best games based on his comic series? Do you want to experience the thrill of fighting against the Joker and his henchmen in a dark and twisted asylum? Do you want to save money and play the game for free? If you answered yes to any of these questions, then this article is for you. In this article, I will show you how to download crack batman arkham asylum pc windows 7 and enjoy the game without any hassle.

-

descargar crack batman arkham asylum pc windows 7


Download File: https://byltly.com/2uKyvy



-

Introduction

-

Batman Arkham Asylum is a game developed by Rocksteady Studios and published by Warner Bros. Interactive Entertainment in 2009. It is an action-adventure game that features stealth, combat, exploration, and puzzle-solving elements. The game follows Batman as he tries to stop the Joker from taking over the Arkham Asylum, a psychiatric facility that houses some of Gotham's most notorious criminals. The game received critical acclaim for its story, gameplay, graphics, voice acting, and atmosphere. It also won several awards, including Game of the Year from various publications.

-

What is Batman Arkham Asylum?

-

Batman Arkham Asylum is a game that puts you in the shoes of Batman, the world's greatest detective and superhero. You will use your skills, gadgets, and abilities to fight against the Joker and his allies, who have taken over the asylum and unleashed chaos. You will also encounter other famous villains from the Batman universe, such as Harley Quinn, Poison Ivy, Killer Croc, Scarecrow, and Bane. You will also have to deal with the mysterious Riddler, who has hidden hundreds of trophies and riddles throughout the asylum for you to find and solve.

-

Why do you need a crack?

-

A crack is a file that modifies or bypasses the original protection of a game or software. It allows you to run the game or software without having to purchase it or enter a serial key. A crack can also fix some bugs or errors that may occur in the original version. However, cracking a game or software is illegal and may cause harm to your computer or device. Therefore, you should only download cracks from trusted sources and at your own risk.

-

descargar crack batman arkham asylum pc windows 7 mega
-descargar crack batman arkham asylum pc windows 7 64 bits
-descargar crack batman arkham asylum pc windows 7 español
-descargar crack batman arkham asylum pc windows 7 full
-descargar crack batman arkham asylum pc windows 7 gratis
-descargar crack batman arkham asylum pc windows 7 sin virus
-descargar crack batman arkham asylum pc windows 7 utorrent
-descargar crack batman arkham asylum pc windows 7 iso
-descargar crack batman arkham asylum pc windows 7 skidrow
-descargar crack batman arkham asylum pc windows 7 reloaded
-descargar crack batman arkham asylum pc windows 7 goty
-descargar crack batman arkham asylum pc windows 7 steam
-descargar crack batman arkham asylum pc windows 7 no cd
-descargar crack batman arkham asylum pc windows 7 fix
-descargar crack batman arkham asylum pc windows 7 patch
-descargar crack batman arkham asylum pc windows 7 gamecopyworld
-descargar crack batman arkham asylum pc windows 7 razor1911
-descargar crack batman arkham asylum pc windows 7 online
-descargar crack batman arkham asylum pc windows 7 mediafire
-descargar crack batman arkham asylum pc windows 7 megaupload
-descargar crack batman arkham asylum pc windows 7 rapidshare
-descargar crack batman arkham asylum pc windows 7 fileserve
-descargar crack batman arkham asylum pc windows 7 filefactory
-descargar crack batman arkham asylum pc windows 7 depositfiles
-descargar crack batman arkham asylum pc windows 7 hotfile
-descargar crack batman arkham asylum pc windows 7 zippyshare
-descargar crack batman arkham asylum pc windows 7 freakshare
-descargar crack batman arkham asylum pc windows 7 bitshare
-descargar crack batman arkham asylum pc windows 7 uploaded
-descargar crack batman arkham asylum pc windows 7 netload
-descargar crack batman arkham asylum pc windows 7 letitbit
-descargar crack batman arkham asylum pc windows 7 turbobit
-descargar crack batman arkham asylum pc windows 7 shareflare
-descargar crack batman arkham asylum pc windows 7 extabit
-descargar crack batman arkham asylum pc windows 7 crocko
-descargar crack batman arkham asylum pc windows 7 oron
-descargar crack batman arkham asylum pc windows 7 wupload
-descargar crack batman arkham asylum pc windows 7 uploadstation
-descargar crack batman arkham asylum pc windows 7 filesonic
-descargar crack batman arkham asylum pc windows 7 filejungle
-descargar crack batman arkham asylum pc windows 7 filepost
-descargar crack batman arkham asylum pc windows 7 filesmonster
-descargar crack batman arkham asylum pc windows 7 easy-share
-descargar crack batman arkham asylum pc windows 7 uploading.com
-descargar crack batman arkham asylum pc windows 7 uploaded.to
-descargar crack batman arkham asylum pc windows 7 bayfiles.com
-descargar crack batman arkham asylum pc windows 7 putlocker.com
-descargar crack batman arkham asylum pc windows 7 sockshare.com
-descargar crack batman arkham asylum pc windows 7 rapidgator.net

-

How to download and install the crack

-

To download crack batman arkham asylum pc windows 7, you will need two things: the Game of the Year edition and the crack file. The Game of the Year edition is a remastered version of the original game that includes four extra challenge maps, and the crack file lets you run it without a purchased copy. Here are the steps you need to follow:

-

Step 1: Download the game of the year edition

-

The first step is to download the game of the year edition from a reliable website. One such website is GOG Unlocked, which offers free downloads of various games. You can find Batman Arkham Asylum Game of the Year Edition on their website by searching for it or clicking on this link. Once you are on their website, click on the blue "download now" button and wait for 5 seconds. Then, click on "create download link" and wait for another 5 seconds. Finally, click on "click here to download" and save the file on your computer.

-

Step 2: Extract the game files

-

The second step is to extract the game files from the ZIP file that you downloaded. You will need software that can extract ZIP files, such as 7-Zip, which you can get here. Once you have installed 7-Zip or a similar tool, right-click on the ZIP file and select "Extract here" or "Extract to Batman: Arkham Asylum Game of the Year Edition v1.1". This will create a folder with all the game files inside.
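
If you prefer to script this step instead of using the right-click menu, a few lines of Python with the standard library do the same extraction. This is only a convenience sketch; the archive and folder names below are placeholders for whatever you actually downloaded.

```python
import zipfile
from pathlib import Path

archive = Path("Batman_Arkham_Asylum_GOTY.zip")  # placeholder for the downloaded archive
target = Path("Batman Arkham Asylum GOTY")       # folder to extract into

# Open the archive and unpack everything into the target folder;
# extractall() creates the folder if it does not exist yet.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)
    print(f"Extracted {len(zf.namelist())} files to {target}")
```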

-

Step 3: Download the crack file

-

The third step is to download the crack file from another reliable website. One such website is MegaGames, which offers various fixes and patches for games. You can find Batman: Arkham Asylum - Game of the Year v1.1 All No-DVD [Prophet] on their website by searching for it or clicking on this link. Once you are on their website, scroll down until you see a blue "Download" button and click on it. Then, save the zip file on your computer.

-

Step 4: Copy and paste the crack file into the game folder

-

The fourth step is to copy and paste the crack file into the game folder that you extracted earlier. To do this, right-click on the zip file that contains the crack file and select "Extract here" or "Extract to BATMAN.AA.GOTY.V1.1.ALL.PROPHET.NODVD". This will create a folder with the crack file inside. Then, open the folder and copy the file named "Binaries". Then, go back to the game folder that contains all the game files and paste the copied file into it. You may be asked to replace or overwrite some files. Click on "Yes" to all.
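
The same copy-and-overwrite can also be scripted. The sketch below assumes the folder layout described above (a "Binaries" folder inside the extracted crack archive mirroring the one in the game directory) and needs Python 3.8 or newer for the `dirs_exist_ok` flag; adjust both paths to match your machine.

```python
import shutil
from pathlib import Path

# Both paths are placeholders -- point them at wherever you extracted things.
crack_dir = Path("BATMAN.AA.GOTY.V1.1.ALL.PROPHET.NODVD") / "Binaries"
game_dir = Path("Batman Arkham Asylum GOTY") / "Binaries"

# Copy the cracked Binaries folder over the game's own Binaries folder,
# overwriting files that already exist (the scripted version of "Yes to all").
shutil.copytree(crack_dir, game_dir, dirs_exist_ok=True)
```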

-

Step 5: Run the game as administrator

-

The final step is to run the game as administrator. To do this, go to the game folder and double-click on the file named "BmStartApp.exe". This will launch the game. However, before you play, you should right-click on the file and select "Properties". Then, go to the "Compatibility" tab and check the box that says "Run this program as an administrator". This will ensure that the game runs smoothly without any errors.

-

Tips and tricks for playing the game

-

Now that you have downloaded crack batman arkham asylum pc windows 7, you are ready to enjoy the game. However, if you want to get the most out of it, you should follow some tips and tricks that will help you improve your skills and have more fun. Here are some of them:

-

How to use the freeflow combat system

-

The freeflow combat system is one of the main features of the game. It allows you to chain together unlimited combos seamlessly and battle with huge groups of enemies in brutal melee brawls. To use it effectively, you should follow these steps:

-

By using the freeflow combat system, you will be able to defeat your enemies with style and efficiency. You will also earn more experience points and unlock new moves and upgrades for your combat skills.

-

How to solve the Riddler's puzzles

-

The Riddler's puzzles are another feature of the game that will challenge your mind and reward you with secrets and collectibles. The Riddler's puzzles consist of six types: Chronicles of Arkham, "Mystery", Patient Interview Tapes, Riddles, Riddler Trophies, and Teeth. Each area of the asylum has a number of these puzzles for you to find and solve. To solve them, you will need to use your Detective Mode, your gadgets, and your logic. Here are some tips for solving them:

-

By solving the Riddler's puzzles, you will be able to unlock concept art, character bios, challenge maps, achievements, and trophies. You will also be able to confront the Riddler himself once you have solved all of his puzzles.

-

How to unlock the extra challenge maps

-

The extra challenge maps are another feature of the game that will test your skills and abilities in different scenarios. The challenge maps are divided into two types: combat and predator. In combat challenge maps, you have to fight waves of enemies and score as many points as possible by using combos and takedowns. In predator challenge maps, you have to stealthily take out enemies without being detected or killed. You can access the challenge maps from the main menu by selecting "Challenge Mode". To unlock them, you have to do one of these things:

-

By playing the challenge maps, you will be able to improve your skills and compete with other players on online leaderboards. You will also earn more experience points and unlock new costumes for Batman.

-

Conclusion

-

Batman Arkham Asylum is a game that will immerse you in the dark and twisted world of Batman and his enemies. It is a game that will make you feel like you are Batman himself as you use your skills, gadgets, and abilities to stop the Joker's plan. It is a game that will offer you hours of fun and entertainment with its story mode, challenge mode, and riddler's puzzles. It is a game that you can play for free by downloading crack batman arkham asylum pc windows 7 from reliable websites.

-

Summary of the main points

-

In this article, I have shown you how to download crack batman arkham asylum pc windows 7 and enjoy the game without any hassle. I have also given you some tips and tricks for playing the game and getting the most out of it. Here are the main points that you should remember:

-

Call to action

-

If you are ready to play one of the best games based on Batman's comic series, then don't wait any longer. Download crack batman arkham asylum pc windows 7 today and start your adventure in Arkham Asylum. You won't regret it!

-
**FAQs**

Q: Is it safe to download crack batman arkham asylum pc windows 7?
A: Downloading cracks is illegal and may cause harm to your computer or device. Therefore, you should only download cracks from trusted sources and at your own risk.

Q: What are the system requirements for playing batman arkham asylum pc windows 7?
A: The minimum system requirements are:
- OS: Windows XP/Vista/7
- Processor: INTEL 2.4 GHz Dual Core
- RAM: 2 GB
- Video Memory: 256 MB
- Video Card: NVIDIA GeForce 6600 or ATI Radeon X1300
- Sound Card: DirectX compatible
- DirectX: 9.0c
- Hard Drive: 8 GB free

Q: How long is the story mode of batman arkham asylum pc windows 7?
A: The story mode takes about 10 to 15 hours to complete, depending on your difficulty level and playstyle.

Q: How many characters are there in batman arkham asylum pc windows 7?
A: There are over 20 characters that you can encounter or play as. Some of them are: Batman, The Joker, Harley Quinn, Commissioner Gordon, Oracle, Alfred, Scarecrow, Poison Ivy, Killer Croc, Bane, Zsasz, and The Riddler.

Q: How can I get more costumes for Batman in batman arkham asylum pc windows 7?
A: You can get more costumes by earning more experience points and unlocking new upgrades for your combat skills. You can also get more costumes by playing the challenge maps or downloading DLCs.

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Serum The Best Wavetable Synthesizer Ever.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Serum The Best Wavetable Synthesizer Ever.md deleted file mode 100644 index 7c4a77ad940d75416464fcea65faa2a1e1ae2575..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Serum The Best Wavetable Synthesizer Ever.md +++ /dev/null @@ -1,31 +0,0 @@ - -

How to Download Serum: A Complete Guide for Beginners

-

Serum is one of the most popular and powerful synthesizers in the music production industry. It allows you to create stunning sounds and effects with its wavetable-based engine, flexible modulation options, and intuitive interface. But how do you download Serum and get started with it?

-

In this article, we will show you how to download Serum from its official website, install it on your computer, and activate it with a license key. We will also give you some tips on how to use Serum effectively and where to find more resources and tutorials. Let's dive in!

-

download serum crack


Download File: https://byltly.com/2uKvFI



-

How to Download Serum from the Official Website

-

The first step to download Serum is to visit the official website of Xfer Records, the company that created Serum. You can find it at https://xferrecords.com/products/serum. There, you will see a button that says "Buy Now". Click on it and you will be redirected to a page where you can choose your payment method and enter your details.

-

Once you complete the payment, you will receive an email with a download link and a license key. The download link will take you to a page where you can choose your operating system (Windows or Mac) and download the installer file. The license key is a code that you will need to activate Serum later.

-

Download the installer file and save it somewhere on your computer. Then, double-click on it and follow the instructions to install Serum on your computer. The installation process is very simple and straightforward. You just need to agree to the terms and conditions, choose a destination folder, and wait for the installation to finish.

-

How to Activate Serum with a License Key

-

After installing Serum on your computer, you need to activate it with your license key. This is a very important step because it will unlock all the features and functions of Serum and prevent any issues or errors.

-

To activate Serum, open your DAW (digital audio workstation) of choice and load Serum as a plugin. You can find it in your VST or AU folder, depending on your operating system and DAW. Once you load Serum, you will see a window that asks you to enter your license key.

-

Copy and paste your license key from the email that you received after purchasing Serum. Make sure that you enter it exactly as it appears in the email, without any spaces or extra characters. Then, click on "Register" and wait for a confirmation message. If everything goes well, you should see a message that says "Thank you for registering Serum!"

-

Congratulations! You have successfully downloaded and activated Serum on your computer. You can now start using it and explore its amazing features.

-

How to Use Serum Effectively

-

Serum is a very versatile and powerful synthesizer that can help you create any sound or effect that you can imagine. However, it can also be overwhelming at first, especially if you are new to synthesis or wavetable synthesis in particular.

-

-

That's why we recommend that you start by learning the basics of Serum and how it works. You can do this by reading the manual that comes with Serum or watching some online tutorials that explain the main features and functions of Serum.

-

Some of the things that you should learn are:

-

Once you master these basics, you can move on to more advanced topics and techniques that will help you take your sound design skills to the next level. You can also experiment with different combinations of wavetables, modulations, filters, effects, and macros to create unique and original sounds.

-

Where to Find More Resources and Tutorials

-

If you want to learn more about Serum and how to use it effectively, there are plenty of resources and tutorials available online that can help you. Here are some of the best ones that we recommend:

ddb901b051
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures.md b/spaces/1gistliPinn/ChatGPT4/Examples/8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures.md deleted file mode 100644 index b02ad573e19cd794d36ca18f89e022f88a10e679..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures.md +++ /dev/null @@ -1,8 +0,0 @@ - -

reg_ff,8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,acaktoolsprofessional. 8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,sniperGhostWarrior2ThievesOftheGameRetail. 8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,Game. 8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,6 hour gameshack. 8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,fx_master_v2_0_windows.

-

reg_ff,8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,atf. ac8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,MEGA C VST DVDAudio3 V2.0.0. Download DVDAudio3. Download DVDAudio3. VST-format processing of short-wave sounds and sounds 8-bit 2-channel,16-bit.

-

8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures


Download Filehttps://imgfil.com/2uxZ3h



-

O4,8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,Auto-Tune 7 VST PCv7.0.6. 8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,acaktoolsprofessional. 8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,Acoustica VST 5.2.8. 8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,Drive Audio Pro 8 VST 2.

-

6way metatun,8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures. 8way x1 album Album Album Album Album Album Album Album Album Album Album 8x12PsdKarizmaAlbumBackgroundsFreeDownloadDownloadFullVersionPictures,libertalia.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Ali-rs232-upgrade-tool-v1-2-0 !LINK! Downloader Full.md b/spaces/1gistliPinn/ChatGPT4/Examples/Ali-rs232-upgrade-tool-v1-2-0 !LINK! Downloader Full.md deleted file mode 100644 index 8e14b5656ebe35af1c55f6102ec6626e6382a950..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Ali-rs232-upgrade-tool-v1-2-0 !LINK! Downloader Full.md +++ /dev/null @@ -1,13 +0,0 @@ -

ali-rs232-upgrade-tool-v1-2-0 downloader Full


Download File: https://imgfil.com/2uy0MY



-
-Aug 9, 2018 - 8, ALI RS232 Upgrade Tool, v 1.2.0 03.2012, Download, SR-2000HD Hyper. 9, GXD Loader, v 1.010, Download, SR-8989HD. 10, Multi Tool Box 2018 v1.5, Download, SMC-NOTES, v 0.9.1, Download,. -Alcatel, All in One, All in One Tool, All in One Tool Lite, All in One Tool Pro. -All in One Tool Premium, All in One Tool Pro, All in One Tool. -Alcatel, All in One, All in One Tool, All in One Tool Pro. -Alcatel, All in One Tool, All in One Tool lite, All in One Tool. -Alcatel, All in One Tool Pro, All in One Tool Pro lite, All in One Tool lite. -Alcatel, All in One Tool Lite, All in One Tool Pro. -Alcatel, All in One Tool Pro, All in One Tool Pro lite. 8a78ff9644
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bus Simulator Indonesia Philippines Mod Apk Discover the Amazing Features of the Philippine Bus Simulator Mod with Bussid Skin Philippines Download.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bus Simulator Indonesia Philippines Mod Apk Discover the Amazing Features of the Philippine Bus Simulator Mod with Bussid Skin Philippines Download.md deleted file mode 100644 index db82f97c857e5d2927fca8baa13d81259ea137f0..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bus Simulator Indonesia Philippines Mod Apk Discover the Amazing Features of the Philippine Bus Simulator Mod with Bussid Skin Philippines Download.md +++ /dev/null @@ -1,139 +0,0 @@ - -

Bus Simulator Indonesia Philippines Mod Apk: A Guide for Gamers

-

If you are a fan of bus simulation games, you might have heard of Bus Simulator Indonesia, a popular game that lets you experience the thrill of driving a bus in Indonesia. But did you know that there is a modded version of this game that adds more features and fun to the gameplay? In this article, we will tell you everything you need to know about Bus Simulator Indonesia Philippines Mod Apk, a mod that allows you to drive buses in the Philippines. We will also show you how to download and install this mod on your Android device, and how to play it like a pro. So, buckle up and get ready for an exciting ride!

-

What is Bus Simulator Indonesia?

-

Bus Simulator Indonesia, also known as BUSSID, is a realistic bus simulation game developed by Maleo. It was released in 2017 and has since gained millions of downloads and positive reviews from players around the world. The game lets you drive various types of buses in different cities and regions of Indonesia, such as Jakarta, Bali, Sumatra, Java, and more. You can also customize your bus with different liveries, accessories, horns, stickers, and more. The game features realistic graphics, physics, traffic, weather, and sounds that make you feel like you are really driving a bus in Indonesia.

-

bus simulator indonesia philippines mod apk


Download Zip: https://urlin.us/2uT2NT



-

Features of Bus Simulator Indonesia

-

Some of the features that make Bus Simulator Indonesia stand out from other bus simulation games are:

-

How to download and install Bus Simulator Indonesia on Android

-

If you want to play Bus Simulator Indonesia on your Android device, you can easily download and install it from the Google Play Store. Here are the steps to do so:

-
  1. Open the Google Play Store app on your device.
  2. Search for "Bus Simulator Indonesia" in the search bar.
  3. Select the game from the list of results and tap on "Install".
  4. Wait for the game to download and install on your device.
  5. Once the installation is complete, tap on "Open" to launch the game.
-

Congratulations! You have successfully installed Bus Simulator Indonesia on your Android device. You can now start playing the game and enjoy driving buses in Indonesia.

-

What is Bus Simulator Indonesia Philippines Mod Apk?

-

Bus Simulator Indonesia Philippines Mod Apk is a modified version of Bus Simulator Indonesia that adds more features and fun to the original game. As the name suggests, this mod allows you to drive buses in the Philippines, instead of Indonesia. You can explore various cities and regions of the Philippines, such as Manila, Cebu, Davao, Baguio, Boracay, and more. You can also choose from different types of buses that are popular in the Philippines, such as jeepneys, coasters, minibuses, double-deckers, etc. You can also customize your bus with different liveries, accessories, horns, stickers, and more. The mod also features realistic graphics, physics, traffic, weather, and sounds that make you feel like you are really driving a bus in the Philippines.

-

Benefits of using Bus Simulator Indonesia Philippines Mod Apk

-

Some of the benefits of using Bus Simulator Indonesia Philippines Mod Apk are:

-

Risks of using Bus Simulator Indonesia Philippines Mod Apk

-

However, using Bus Simulator Indonesia Philippines Mod Apk also comes with some risks that you should be aware of. Some of the risks are:

-

Therefore, you should use Bus Simulator Indonesia Philippines Mod Apk at your own risk and discretion. We are not responsible for any damages or losses that may occur as a result of using this mod.

-

bussid mod bus philippines apk
-bussid mod philippines map download
-bussid skin traffic philippines apk
-bussid mod philippines victorya linear
-bussid mod philippines truck livery
-bussid mod bus simulator indonesia philippine bus
-bussid mod philippines car download
-bussid mod philippines community join
-bussid mod bus simulator the best philippines
-bussid mod philippines 2023 update
-bussid skin philippines download free
-bussid mod philippines map dtutorial
-bussid mod bus simulator vietnam philippines
-bussid mod philippines exfoh diesel
-bussid mod bus simulator thailand philippines
-bussid mod bus simulator cambodia philippines
-bussid mod bus simulator myanmar philippines
-bussid mod bus simulator malaysia philippines
-bussid skin traffic philippines download latest
-bussid mod bus simulator asia philippines
-bussid weebly mod bus simulator indonesia philippine
-bussid review mod map bus simulator indonesia philippine
-bussid 2023 full strobe livery bus simulator indonesia philippine
-bussid hd 2023 bus simulator indonesia philippine
-bussid mbois 2023 bus simulator indonesia philippine
-bussid sr exfoh ordinary bus simulator indonesia philippine
-bussid tourism bus mod bus simulator indonesia philippine
-bussid damaged road map mod bus simulator indonesia philippine
-bussid muddy road map mod bus simulator indonesia philippine
-bussid dragon bend map mod bus simulator indonesia philippine
-bussid bend 44 map mod bus simulator indonesia philippine
-bussid complete variant map mod bus simulator indonesia philippine
-bussid complete variant skin mod bus simulator indonesia philippine
-bussid complete variant truck mod bus simulator indonesia philippine
-bussid complete variant car mod bus simulator indonesia philippine
-bussid complete variant livery mod bus simulator indonesia philippine
-bussid complete variant traffic mod bus simulator indonesia philippine
-bussid holy grail fusion experiment mini sun bus simulator indonesia philippine (just kidding 😜)
-bussid net energy gain nuclear fusion reaction bus simulator indonesia philippine (also kidding 😂)
-bussid superconducting tokamak advanced research facility korea institute of fusion energy bus simulator indonesia philippine (okay, I'll stop now 😅)

-

How to download and install Bus Simulator Indonesia Philippines Mod Apk on Android

-

If you want to try Bus Simulator Indonesia Philippines Mod Apk on your Android device, you will need to download and install it from a third-party source. This is because this mod is not available on the Google Play Store or any other official app store. Here are the steps to do so:

-

Steps to download and install Bus Simulator Indonesia Philippines Mod Apk

-
  1. First, you will need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
  2. Next, you will need to download the Bus Simulator Indonesia Philippines Mod Apk file from a reliable website. You can search for it on Google or use this link: . Make sure you download the latest version of the mod that is compatible with your device and the original game.
  3. After downloading the file, locate it in your device's file manager and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to complete.
  4. Once the installation is done, you will see a new icon for Bus Simulator Indonesia Philippines Mod Apk on your device's home screen. Tap on it to launch the game.
-

Congratulations! You have successfully installed Bus Simulator Indonesia Philippines Mod Apk on your Android device. You can now start playing the game and enjoy driving buses in the Philippines.

How to play Bus Simulator Indonesia Philippines Mod Apk

-

Now that you have downloaded and installed Bus Simulator Indonesia Philippines Mod Apk on your Android device, you might be wondering how to play it. Don't worry, we have got you covered. In this section, we will give you some tips and tricks on how to play Bus Simulator Indonesia Philippines Mod Apk like a pro.

-

How to choose and customize your bus

-

One of the fun aspects of Bus Simulator Indonesia Philippines Mod Apk is that you can choose and customize your own bus. You can do this by tapping on the garage icon on the main menu. Here, you can see a list of buses that you can buy or unlock with coins or diamonds. You can also see the stats and features of each bus, such as speed, acceleration, braking, handling, fuel capacity, etc. To buy or unlock a bus, simply tap on the buy or unlock button and confirm your purchase.

-

Once you have bought or unlocked a bus, you can customize it with different liveries, accessories, horns, stickers, and more. You can do this by tapping on the customize icon on the garage menu. Here, you can see a list of categories that you can modify, such as body, wheels, lights, interior, etc. To customize a category, simply tap on it and choose from the available options. You can also see a preview of how your bus will look like after customization. To apply your changes, simply tap on the apply button and confirm your customization.

-

How to drive and control your bus

-

Another fun aspect of Bus Simulator Indonesia Philippines Mod Apk is that you can drive and control your bus in a realistic way. You can do this by tapping on the drive icon on the main menu. Here, you can see a map of the Philippines where you can choose your starting point and destination. You can also see the distance and time of your trip, as well as the traffic and weather conditions. To start your trip, simply tap on the start button and wait for the loading screen.

-

Once you are in the game, you can see a dashboard that shows your speedometer, fuel gauge, gear indicator, steering wheel, horn button, indicators, headlights, wipers, etc. You can also see a mini-map that shows your location and direction. To control your bus, you can use the following options:

-

To drive safely and smoothly, you should follow the rules and regulations of Philippine traffic. You should also pay attention to the traffic signs, signals, pedestrians, vehicles, obstacles, etc. that you encounter along the way. You should also avoid crashing or damaging your bus as much as possible.

How to complete missions and earn rewards

-

The last fun aspect of Bus Simulator Indonesia Philippines Mod Apk is that you can complete missions and earn rewards. You can do this by tapping on the mission icon on the main menu. Here, you can see a list of missions that you can accept and complete. Each mission has a different objective, such as transporting passengers, delivering cargo, reaching a destination, etc. Each mission also has a different difficulty level, such as easy, medium, hard, etc. To accept a mission, simply tap on the accept button and start your trip.

-

Once you are in the game, you can see a mission indicator that shows your progress and status. You can also see a timer that shows how much time you have left to complete the mission. To complete a mission, you have to follow the instructions and objectives that are given to you. You also have to avoid failing or aborting the mission by crashing, running out of fuel, breaking the law, etc.

-

When you complete a mission successfully, you will earn rewards such as coins, diamonds, experience points, etc. You can use these rewards to buy or unlock new buses and upgrades, or to access new features and modes. You can also see your rank and achievements on the leaderboard and compare them with other players.

-

Conclusion

-

Bus Simulator Indonesia Philippines Mod Apk is a fun and exciting mod that adds more features and fun to the original Bus Simulator Indonesia game. It allows you to drive buses in the Philippines, instead of Indonesia. You can also choose from different types of buses and customize them. You can also complete missions and earn rewards. However, you should also be aware of the risks of using this mod, such as compatibility issues, bugs, glitches, bans, malware, viruses, etc. You should also use this mod at your own risk and discretion.

-

If you are interested in trying Bus Simulator Indonesia Philippines Mod Apk on your Android device, you can follow the steps that we have provided in this article. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

-

FAQs

-

Here are some frequently asked questions about Bus Simulator Indonesia Philippines Mod Apk:

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/2023Liu2023/bingo/src/components/toaster.tsx b/spaces/2023Liu2023/bingo/src/components/toaster.tsx deleted file mode 100644 index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000 --- a/spaces/2023Liu2023/bingo/src/components/toaster.tsx +++ /dev/null @@ -1,3 +0,0 @@ -'use client' - -export { Toaster } from 'react-hot-toast' diff --git a/spaces/221090Lstwcm/textgenerator/README.md b/spaces/221090Lstwcm/textgenerator/README.md deleted file mode 100644 index 32e619608b009e0b419d34b0274f804aefb15a92..0000000000000000000000000000000000000000 --- a/spaces/221090Lstwcm/textgenerator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Textgenerator -emoji: 🌍 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/232labs/VToonify/vtoonify/model/encoder/encoders/model_irse.py b/spaces/232labs/VToonify/vtoonify/model/encoder/encoders/model_irse.py deleted file mode 100644 index 6698d9705321dd4a27681ea15204e9ffaa51f62a..0000000000000000000000000000000000000000 --- a/spaces/232labs/VToonify/vtoonify/model/encoder/encoders/model_irse.py +++ /dev/null @@ -1,84 +0,0 @@ -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module -from model.encoder.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm - -""" -Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Backbone(Module): - def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True): - super(Backbone, self).__init__() - assert input_size in [112, 224], "input_size should be 112 or 224" - assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152" - assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se" - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - if input_size == 112: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512, affine=affine)) - else: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 14 * 14, 512), - BatchNorm1d(512, affine=affine)) - - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -def IR_50(input_size): - """Constructs a ir-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_101(input_size): - """Constructs a ir-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_152(input_size): - """Constructs a ir-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_50(input_size): - """Constructs a ir_se-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False) - return 
model - - -def IR_SE_101(input_size): - """Constructs a ir_se-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_152(input_size): - """Constructs a ir_se-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False) - return model diff --git a/spaces/7hao/bingo/src/pages/api/create.ts b/spaces/7hao/bingo/src/pages/api/create.ts deleted file mode 100644 index 508fa97ef609cbb215a61085711638e116235ebe..0000000000000000000000000000000000000000 --- a/spaces/7hao/bingo/src/pages/api/create.ts +++ /dev/null @@ -1,31 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { fetch, debug } from '@/lib/isomorphic' -import { createHeaders } from '@/lib/utils' - -// const API_ENDPOINT = 'https://www.bing.com/turing/conversation/create' -const API_ENDPOINT = 'https://edgeservices.bing.com/edgesvc/turing/conversation/create'; - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const headers = createHeaders(req.cookies) - - res.writeHead(200, { - 'Content-Type': 'application/json', - }) - - debug('headers', headers) - const response = await fetch(API_ENDPOINT, { method: 'GET', headers }) - .then((res) => res.text()) - - res.end(response) - } catch (e) { - return res.end(JSON.stringify({ - result: { - value: 'UnauthorizedRequest', - message: `${e}` - } - })) - } -} diff --git a/spaces/ADOPLE/Multi-Doc-Virtual-Chatbot/style.css b/spaces/ADOPLE/Multi-Doc-Virtual-Chatbot/style.css deleted file mode 100644 index 03cfcd6816530d32c1a8ea6c85547fc277b4c331..0000000000000000000000000000000000000000 --- a/spaces/ADOPLE/Multi-Doc-Virtual-Chatbot/style.css +++ /dev/null @@ -1,38 +0,0 @@ -#col-container { - max-width: 1000px; - margin-left: auto; - margin-right: auto; -} -.heightfit{ - height:120px; -} - -#row-flex { - display: flex; - align-items: center; - justify-content: center; -} -.leftimage .rightimage{ - float:left; -} -.leftimage{ - padding-top:27px; - margin-left:210px; -} -.rightimage{ - margin-right:210px; - margin-top:15px; -} -a, -a:hover, -a:visited { - text-decoration-line: underline; - font-weight: 600; - color: #1f2937 !important; -} - -.dark a, -.dark a:hover, -.dark a:visited { - color: #f3f4f6 !important; -} diff --git a/spaces/AI-Dashboards/Graph.NLP.Sentence.Similarity.Heatmap.KMeansCluster/README.md b/spaces/AI-Dashboards/Graph.NLP.Sentence.Similarity.Heatmap.KMeansCluster/README.md deleted file mode 100644 index 588cca7b119e55469da63891f957e82cf529cccf..0000000000000000000000000000000000000000 --- a/spaces/AI-Dashboards/Graph.NLP.Sentence.Similarity.Heatmap.KMeansCluster/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 06SL NLP SentenceSimilarity Heatmap Cluster -emoji: 📊 -colorFrom: indigo -colorTo: pink -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIGText/GlyphControl/ldm/modules/diffusionmodules/model.py b/spaces/AIGText/GlyphControl/ldm/modules/diffusionmodules/model.py deleted file mode 100644 index 4e6453cb35ecb3b9106f5d658244532c5ec2f1e6..0000000000000000000000000000000000000000 --- a/spaces/AIGText/GlyphControl/ldm/modules/diffusionmodules/model.py +++ /dev/null @@ -1,853 +0,0 @@ -# pytorch_diffusion + derived encoder decoder -import math -import torch -import torch.nn as nn -import numpy as np -from einops import 
rearrange -from typing import Optional, Any - -from ldm.modules.attention import MemoryEfficientCrossAttention - -try: - import xformers - import xformers.ops - XFORMERS_IS_AVAILBLE = True -except Exception as e: - print("xformer", e) - XFORMERS_IS_AVAILBLE = False - print("No module 'xformers'. Proceeding without it.") -# XFORMERS_IS_AVAILBLE = False - -def get_timestep_embedding(timesteps, embedding_dim): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: - From Fairseq. - Build sinusoidal embeddings. - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". - """ - assert len(timesteps.shape) == 1 - - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb) - emb = emb.to(device=timesteps.device) - emb = timesteps.float()[:, None] * emb[None, :] - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) - if embedding_dim % 2 == 1: # zero pad - emb = torch.nn.functional.pad(emb, (0,1,0,0)) - return emb - - -def nonlinearity(x): - # swish - return x*torch.sigmoid(x) - - -def Normalize(in_channels, num_groups=32): - return torch.nn.GroupNorm(num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True) - - -class Upsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest") - if self.with_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=2, - padding=0) - - def forward(self, x): - if self.with_conv: - pad = (0,1,0,1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - else: - x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2) - return x - - -class ResnetBlock(nn.Module): - def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False, - dropout, temb_channels=512): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - - self.norm1 = Normalize(in_channels) - self.conv1 = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if temb_channels > 0: - self.temb_proj = torch.nn.Linear(temb_channels, - out_channels) - self.norm2 = Normalize(out_channels) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d(out_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - self.conv_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - else: - self.nin_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x, temb): - h = x - h = self.norm1(h) - h = nonlinearity(h) - h = self.conv1(h) - - if temb is not None: - h = h + self.temb_proj(nonlinearity(temb))[:,:,None,None] - - h = self.norm2(h) - h 
= nonlinearity(h) - h = self.dropout(h) - h = self.conv2(h) - - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - x = self.conv_shortcut(x) - else: - x = self.nin_shortcut(x) - - return x+h - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = q.reshape(b,c,h*w) - q = q.permute(0,2,1) # b,hw,c - k = k.reshape(b,c,h*w) # b,c,hw - w_ = torch.bmm(q,k) # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j] - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b,c,h*w) - w_ = w_.permute(0,2,1) # b,hw,hw (first hw of k, second of q) - h_ = torch.bmm(v,w_) # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j] - h_ = h_.reshape(b,c,h,w) - - h_ = self.proj_out(h_) - - return x+h_ - -class MemoryEfficientAttnBlock(nn.Module): - """ - Uses xformers efficient implementation, - see https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223 - Note: this is a single-head self-attention operation - """ - # - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.attention_op: Optional[Any] = None - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - B, C, H, W = q.shape - q, k, v = map(lambda x: rearrange(x, 'b c h w -> b (h w) c'), (q, k, v)) - - q, k, v = map( - lambda t: t.unsqueeze(3) - .reshape(B, t.shape[1], 1, C) - .permute(0, 2, 1, 3) - .reshape(B * 1, t.shape[1], C) - .contiguous(), - (q, k, v), - ) - out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op) - - out = ( - out.unsqueeze(0) - .reshape(B, 1, out.shape[1], C) - .permute(0, 2, 1, 3) - .reshape(B, out.shape[1], C) - ) - out = rearrange(out, 'b (h w) c -> b c h w', b=B, h=H, w=W, c=C) - out = self.proj_out(out) - return x+out - - -class MemoryEfficientCrossAttentionWrapper(MemoryEfficientCrossAttention): - def forward(self, x, context=None, mask=None): - b, c, h, w = x.shape - x = rearrange(x, 'b c h w -> b (h w) c') - out = super().forward(x, context=context, mask=mask) - out = rearrange(out, 'b (h w) c -> b c h w', h=h, w=w, c=c) - return x + out - - -def make_attn(in_channels, attn_type="vanilla", attn_kwargs=None): - assert attn_type in ["vanilla", "vanilla-xformers", "memory-efficient-cross-attn", "linear", "none"], f'attn_type {attn_type} unknown' - if 
XFORMERS_IS_AVAILBLE and attn_type == "vanilla": - attn_type = "vanilla-xformers" - print(f"making attention of type '{attn_type}' with {in_channels} in_channels") - if attn_type == "vanilla": - assert attn_kwargs is None - return AttnBlock(in_channels) - elif attn_type == "vanilla-xformers": - print(f"building MemoryEfficientAttnBlock with {in_channels} in_channels...") - return MemoryEfficientAttnBlock(in_channels) - elif attn_type == "memory-efficient-cross-attn": - attn_kwargs["query_dim"] = in_channels - return MemoryEfficientCrossAttentionWrapper(**attn_kwargs) - elif attn_type == "none": - return nn.Identity(in_channels) - else: - raise NotImplementedError() - - -class Model(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, use_timestep=True, use_linear_attn=False, attn_type="vanilla"): - super().__init__() - if use_linear_attn: attn_type = "linear" - self.ch = ch - self.temb_ch = self.ch*4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList([ - torch.nn.Linear(self.ch, - self.temb_ch), - torch.nn.Linear(self.temb_ch, - self.temb_ch), - ]) - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - skip_in = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - if i_block == self.num_res_blocks: - skip_in = ch*in_ch_mult[i_level] - block.append(ResnetBlock(in_channels=block_in+skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, 
- stride=1, - padding=1) - - def forward(self, x, t=None, context=None): - #assert x.shape[2] == x.shape[3] == self.resolution - if context is not None: - # assume aligned context, cat along channel axis - x = torch.cat((x, context), dim=1) - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - def get_last_layer(self): - return self.conv_out.weight - - -class Encoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, double_z=True, use_linear_attn=False, attn_type="vanilla", - **ignore_kwargs): - super().__init__() - if use_linear_attn: attn_type = "linear" - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.in_ch_mult = in_ch_mult - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - 2*z_channels if double_z else z_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - # timestep embedding - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - 
for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Decoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, give_pre_end=False, tanh_out=False, use_linear_attn=False, - attn_type="vanilla", **ignorekwargs): - super().__init__() - if use_linear_attn: attn_type = "linear" - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - self.give_pre_end = give_pre_end - self.tanh_out = tanh_out - - # compute in_ch_mult, block_in and curr_res at lowest res - in_ch_mult = (1,)+tuple(ch_mult) - block_in = ch*ch_mult[self.num_resolutions-1] - curr_res = resolution // 2**(self.num_resolutions-1) - self.z_shape = (1,z_channels,curr_res,curr_res) - print("Working with z of shape {} = {} dimensions.".format( - self.z_shape, np.prod(self.z_shape))) - - # z to block_in - self.conv_in = torch.nn.Conv2d(z_channels, - block_in, - kernel_size=3, - stride=1, - padding=1) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, z): - #assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block](h, temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - if self.give_pre_end: - return h - - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - if self.tanh_out: - h = torch.tanh(h) - return h - - -class SimpleDecoder(nn.Module): - def __init__(self, 
in_channels, out_channels, *args, **kwargs): - super().__init__() - self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1), - ResnetBlock(in_channels=in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=2 * in_channels, - out_channels=4 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=4 * in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - nn.Conv2d(2*in_channels, in_channels, 1), - Upsample(in_channels, with_conv=True)]) - # end - self.norm_out = Normalize(in_channels) - self.conv_out = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - for i, layer in enumerate(self.model): - if i in [1,2,3]: - x = layer(x, None) - else: - x = layer(x) - - h = self.norm_out(x) - h = nonlinearity(h) - x = self.conv_out(h) - return x - - -class UpsampleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution, - ch_mult=(2,2), dropout=0.0): - super().__init__() - # upsampling - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - block_in = in_channels - curr_res = resolution // 2 ** (self.num_resolutions - 1) - self.res_blocks = nn.ModuleList() - self.upsample_blocks = nn.ModuleList() - for i_level in range(self.num_resolutions): - res_block = [] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - res_block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - self.res_blocks.append(nn.ModuleList(res_block)) - if i_level != self.num_resolutions - 1: - self.upsample_blocks.append(Upsample(block_in, True)) - curr_res = curr_res * 2 - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - # upsampling - h = x - for k, i_level in enumerate(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.res_blocks[i_level][i_block](h, None) - if i_level != self.num_resolutions - 1: - h = self.upsample_blocks[k](h) - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class LatentRescaler(nn.Module): - def __init__(self, factor, in_channels, mid_channels, out_channels, depth=2): - super().__init__() - # residual block, interpolate, residual block - self.factor = factor - self.conv_in = nn.Conv2d(in_channels, - mid_channels, - kernel_size=3, - stride=1, - padding=1) - self.res_block1 = nn.ModuleList([ResnetBlock(in_channels=mid_channels, - out_channels=mid_channels, - temb_channels=0, - dropout=0.0) for _ in range(depth)]) - self.attn = AttnBlock(mid_channels) - self.res_block2 = nn.ModuleList([ResnetBlock(in_channels=mid_channels, - out_channels=mid_channels, - temb_channels=0, - dropout=0.0) for _ in range(depth)]) - - self.conv_out = nn.Conv2d(mid_channels, - out_channels, - kernel_size=1, - ) - - def forward(self, x): - x = self.conv_in(x) - for block in self.res_block1: - x = block(x, None) - x = torch.nn.functional.interpolate(x, size=(int(round(x.shape[2]*self.factor)), int(round(x.shape[3]*self.factor)))) - x = self.attn(x) - for block in self.res_block2: - x = block(x, None) - x = self.conv_out(x) - return x - - -class MergedRescaleEncoder(nn.Module): - def __init__(self, in_channels, ch, resolution, out_ch, num_res_blocks, - attn_resolutions, 
dropout=0.0, resamp_with_conv=True,
-                 ch_mult=(1,2,4,8), rescale_factor=1.0, rescale_module_depth=1):
-        super().__init__()
-        intermediate_chn = ch * ch_mult[-1]
-        self.encoder = Encoder(in_channels=in_channels, num_res_blocks=num_res_blocks, ch=ch, ch_mult=ch_mult,
-                               z_channels=intermediate_chn, double_z=False, resolution=resolution,
-                               attn_resolutions=attn_resolutions, dropout=dropout, resamp_with_conv=resamp_with_conv,
-                               out_ch=None)
-        self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=intermediate_chn,
-                                       mid_channels=intermediate_chn, out_channels=out_ch, depth=rescale_module_depth)
-
-    def forward(self, x):
-        x = self.encoder(x)
-        x = self.rescaler(x)
-        return x
-
-
-class MergedRescaleDecoder(nn.Module):
-    def __init__(self, z_channels, out_ch, resolution, num_res_blocks, attn_resolutions, ch, ch_mult=(1,2,4,8),
-                 dropout=0.0, resamp_with_conv=True, rescale_factor=1.0, rescale_module_depth=1):
-        super().__init__()
-        tmp_chn = z_channels*ch_mult[-1]
-        self.decoder = Decoder(out_ch=out_ch, z_channels=tmp_chn, attn_resolutions=attn_resolutions, dropout=dropout,
-                               resamp_with_conv=resamp_with_conv, in_channels=None, num_res_blocks=num_res_blocks,
-                               ch_mult=ch_mult, resolution=resolution, ch=ch)
-        self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=z_channels, mid_channels=tmp_chn,
-                                       out_channels=tmp_chn, depth=rescale_module_depth)
-
-    def forward(self, x):
-        x = self.rescaler(x)
-        x = self.decoder(x)
-        return x
-
-
-class Upsampler(nn.Module):
-    def __init__(self, in_size, out_size, in_channels, out_channels, ch_mult=2):
-        super().__init__()
-        assert out_size >= in_size
-        num_blocks = int(np.log2(out_size//in_size))+1
-        factor_up = 1.+ (out_size % in_size)
-        print(f"Building {self.__class__.__name__} with in_size: {in_size} --> out_size {out_size} and factor {factor_up}")
-        self.rescaler = LatentRescaler(factor=factor_up, in_channels=in_channels, mid_channels=2*in_channels,
-                                       out_channels=in_channels)
-        self.decoder = Decoder(out_ch=out_channels, resolution=out_size, z_channels=in_channels, num_res_blocks=2,
-                               attn_resolutions=[], in_channels=None, ch=in_channels,
-                               ch_mult=[ch_mult for _ in range(num_blocks)])
-
-    def forward(self, x):
-        x = self.rescaler(x)
-        x = self.decoder(x)
-        return x
-
-
-class Resize(nn.Module):
-    def __init__(self, in_channels=None, learned=False, mode="bilinear"):
-        super().__init__()
-        self.with_conv = learned
-        self.mode = mode
-        if self.with_conv:
-            print(f"Note: {self.__class__.__name__} uses learned downsampling and will ignore the fixed {mode} mode")
-            raise NotImplementedError()
-            assert in_channels is not None
-            # no asymmetric padding in torch conv, must do it ourselves
-            self.conv = torch.nn.Conv2d(in_channels,
-                                        in_channels,
-                                        kernel_size=4,
-                                        stride=2,
-                                        padding=1)
-
-    def forward(self, x, scale_factor=1.0):
-        if scale_factor==1.0:
-            return x
-        else:
-            x = torch.nn.functional.interpolate(x, mode=self.mode, align_corners=False, scale_factor=scale_factor)
-        return x
diff --git a/spaces/AIGText/GlyphControl/ldm/modules/encoders/modules.py b/spaces/AIGText/GlyphControl/ldm/modules/encoders/modules.py
deleted file mode 100644
index 0a4c77b8c77cf847b5cf0a330ea81f47adb3391d..0000000000000000000000000000000000000000
--- a/spaces/AIGText/GlyphControl/ldm/modules/encoders/modules.py
+++ /dev/null
@@ -1,459 +0,0 @@
-import torch
-import torch.nn as nn
-from torch.utils.checkpoint import checkpoint
-
-from transformers import T5Tokenizer, T5EncoderModel, CLIPTokenizer, CLIPTextModel, T5ForConditionalGeneration, AutoTokenizer, 
ByT5Tokenizer -from transformers import AutoProcessor, CLIPVisionModel -import open_clip -from ldm.util import default, count_params, islistortuple -from transformers import PreTrainedTokenizerBase -from ldm.modules.diffusionmodules.util import zero_module, identity_init_fc -class AbstractEncoder(nn.Module): - def __init__(self): - super().__init__() - - def encode(self, *args, **kwargs): - raise NotImplementedError - - -class IdentityEncoder(AbstractEncoder): - - def encode(self, x): - return x - - -class ClassEmbedder(nn.Module): - def __init__(self, embed_dim, n_classes=1000, key='class', ucg_rate=0.1): - super().__init__() - self.key = key - self.embedding = nn.Embedding(n_classes, embed_dim) - self.n_classes = n_classes - self.ucg_rate = ucg_rate - - def forward(self, batch, key=None, disable_dropout=False): - if key is None: - key = self.key - # this is for use in crossattn - c = batch[key][:, None] - if self.ucg_rate > 0. and not disable_dropout: - mask = 1. - torch.bernoulli(torch.ones_like(c) * self.ucg_rate) - c = mask * c + (1-mask) * torch.ones_like(c)*(self.n_classes-1) - c = c.long() - c = self.embedding(c) - return c - - def get_unconditional_conditioning(self, bs, device="cuda"): - uc_class = self.n_classes - 1 # 1000 classes --> 0 ... 999, one extra class for ucg (class 1000) - uc = torch.ones((bs,), device=device) * uc_class - uc = {self.key: uc} - return uc - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -class FrozenT5Embedder_old(AbstractEncoder): - """Uses the T5 transformer encoder for text""" - def __init__(self, version="google/t5-v1_1-large", device="cuda", max_length=77, freeze=True): # others are google/t5-v1_1-xl and google/t5-v1_1-xxl - super().__init__() - self.tokenizer = T5Tokenizer.from_pretrained(version) - self.transformer = T5EncoderModel.from_pretrained(version) - self.device = device - self.max_length = max_length # TODO: typical value? - if freeze: - self.freeze() - - def freeze(self): - self.transformer = self.transformer.eval() - #self.train = disabled_train - for param in self.parameters(): - param.requires_grad = False - - def forward(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding="max_length", return_tensors="pt") - tokens = batch_encoding["input_ids"].to(self.device) - outputs = self.transformer(input_ids=tokens) - - z = outputs.last_hidden_state - return z - - def encode(self, text): - return self(text) - -class FrozenT5Embedder(AbstractEncoder): - """Uses the T5/ByT5 transformer encoder for text""" - def __init__(self, version="google/t5-v1_1-large", device="cuda", max_length=77, freeze=True, padding="max_length"): - # version: others for T5 are google/t5-v1_1-xl, google/t5-v1_1-xxl, google/t5-v1_1-small, google/t5-v1_1-base and google/t5-v1_1-large - # for ByT5 are google/byt5-small, google/byt5-base, google/byt5-large, google/byt5-xl and google/byt5-xxl - # padding: "max_length" or "longest" - # https://huggingface.co/docs/transformers/v4.24.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase - super().__init__() - self.tokenizer = T5Tokenizer.from_pretrained(version) if "byt5" not in version else ByT5Tokenizer.from_pretrained(version) - self.transformer = T5EncoderModel.from_pretrained(version) - self.device = device - self.max_length = max_length # TODO: typical value? 
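-        # "max_length" pads every prompt to exactly `max_length` tokens, while
-        # "longest" pads only up to the longest prompt in the batch (see the HF
-        # tokenizer docs linked above). A minimal usage sketch, assuming the
-        # checkpoint can be fetched from the Hub:
-        #   enc = FrozenT5Embedder("google/byt5-small", device="cpu", padding="longest")
-        #   z = enc.encode(["some text"])  # -> tensor of shape [1, seq_len, hidden_dim]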
- self.padding = padding - if freeze: - self.freeze() - - def freeze(self): - self.transformer = self.transformer.eval() - #self.train = disabled_train - for param in self.parameters(): - param.requires_grad = False - - def forward(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding=self.padding, return_tensors="pt") - tokens = batch_encoding["input_ids"].to(self.device) - outputs = self.transformer(input_ids=tokens) - - z = outputs.last_hidden_state - return z - - def encode(self, text): - return self(text) - -class FrozenCLIPEmbedder(AbstractEncoder): - """Uses the CLIP transformer encoder for text (from huggingface)""" - LAYERS = [ - "last", - "pooled", - "hidden" - ] - def __init__(self, version="openai/clip-vit-large-patch14", device="cuda", max_length=77, - freeze=True, layer="last", layer_idx=None): # clip-vit-base-patch32 - super().__init__() - assert layer in self.LAYERS - self.tokenizer = CLIPTokenizer.from_pretrained(version) - self.transformer = CLIPTextModel.from_pretrained(version) - self.device = device - self.max_length = max_length - if freeze: - self.freeze() - self.layer = layer - self.layer_idx = layer_idx - if layer == "hidden": - assert layer_idx is not None - assert 0 <= abs(layer_idx) <= 12 - - def freeze(self): - self.transformer = self.transformer.eval() - #self.train = disabled_train - for param in self.parameters(): - param.requires_grad = False - - def forward(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding="max_length", return_tensors="pt") - tokens = batch_encoding["input_ids"].to(self.device) - outputs = self.transformer(input_ids=tokens, output_hidden_states=self.layer=="hidden") - if self.layer == "last": - z = outputs.last_hidden_state - elif self.layer == "pooled": - z = outputs.pooler_output[:, None, :] - else: - z = outputs.hidden_states[self.layer_idx] - return z - - def encode(self, text): - return self(text) - - -class FrozenOpenCLIPEmbedder(AbstractEncoder): - """ - Uses the OpenCLIP transformer encoder for text - """ - LAYERS = [ - #"pooled", - "last", - "penultimate" - ] - def __init__(self, arch="ViT-H-14", version="laion2b_s32b_b79k", device="cuda", max_length=77, - freeze=True, layer="last"): - super().__init__() - assert layer in self.LAYERS - print("Start initializing the CLIP text encoder") - model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version) - print("Initialization ends") - # aa = model.encode_image(torch.zeros((1, 3,224,224))) - del model.visual - self.model = model - - if not torch.cuda.is_available(): - self.device = "cpu" - else: - self.device = device - - self.max_length = max_length - if freeze: - self.freeze() - self.layer = layer - if self.layer == "last": - self.layer_idx = 0 - elif self.layer == "penultimate": - self.layer_idx = 1 - else: - raise NotImplementedError() - - def freeze(self): - self.model = self.model.eval() - for param in self.parameters(): - param.requires_grad = False - - def forward(self, text): - tokens = open_clip.tokenize(text) - z = self.encode_with_transformer(tokens.to(self.device)) - return z - - def encode_with_transformer(self, text): - x = self.model.token_embedding(text) # [batch_size, n_ctx, d_model] - x = x + self.model.positional_embedding - x = x.permute(1, 0, 2) # NLD -> LND - x = self.text_transformer_forward(x, 
attn_mask=self.model.attn_mask) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.model.ln_final(x) - # did not do: - # x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.model.text_projection - # x = F.normalize(x, dim=-1) if normalize else x - return x - - def text_transformer_forward(self, x: torch.Tensor, attn_mask = None): - for i, r in enumerate(self.model.transformer.resblocks): - if i == len(self.model.transformer.resblocks) - self.layer_idx: - break - if self.model.transformer.grad_checkpointing and not torch.jit.is_scripting(): - x = checkpoint(r, x, attn_mask) - else: - x = r(x, attn_mask=attn_mask) - return x - - def encode(self, text): - return self(text) - -class FrozenOpenCLIPSepEncoder(FrozenOpenCLIPEmbedder): - def forward(self, text): - if islistortuple(text) and len(text) > 0 and islistortuple(text[0]): - z_list = [] - for ti in text: - tokens = open_clip.tokenize(ti) - z = self.encode_with_transformer(tokens.to(self.device)) - z_list.append(z) - return z_list - else: - tokens = open_clip.tokenize(text) - z = self.encode_with_transformer(tokens.to(self.device)) - return z - - -class FrozenCLIPT5Encoder(AbstractEncoder): - def __init__(self, - clip_version="openai/clip-vit-large-patch14", clip_max_length=77, layer="last", layer_idx=None, - t5_version="google/t5-v1_1-xl", t5_max_length=77, padding="max_length", - freeze=True, device="cuda"): - super().__init__() - self.clip_encoder = FrozenCLIPEmbedder( - clip_version, device, max_length=clip_max_length, freeze=freeze, layer=layer, layer_idx=layer_idx - ) - self.t5_encoder = FrozenT5Embedder( - t5_version, device, max_length=t5_max_length, freeze=freeze, padding=padding - ) - print(f"{self.clip_encoder.__class__.__name__} has {count_params(self.clip_encoder)*1.e-6:.2f} M parameters, " - f"{self.t5_encoder.__class__.__name__} comes with {count_params(self.t5_encoder)*1.e-6:.2f} M params.") - - def encode(self, text): - return self(text) - - def forward(self, text): - clip_z = self.clip_encoder.encode(text) - t5_z = self.t5_encoder.encode(text) - return [clip_z, t5_z] - -class FrozenOpenCLIPT5Encoder(AbstractEncoder): - def __init__(self, - arch="ViT-H-14", clip_version="laion2b_s32b_b79k", layer="last", clip_max_length=77, - t5_version="google/t5-v1_1-small", t5_max_length=77, padding="max_length", - device="cuda", freeze=True): - super().__init__() - self.clip_encoder = FrozenOpenCLIPEmbedder( - arch=arch, version=clip_version, device=device, max_length=clip_max_length, - freeze=freeze, layer=layer - ) - self.t5_encoder = FrozenT5Embedder( - t5_version, device, max_length=t5_max_length, freeze=freeze, padding=padding - ) - print(f"{self.clip_encoder.__class__.__name__} has {count_params(self.clip_encoder)*1.e-6:.2f} M parameters, " - f"{self.t5_encoder.__class__.__name__} comes with {count_params(self.t5_encoder)*1.e-6:.2f} M params.") - - def encode(self, text): - return self(text) - - def forward(self, text): - clip_z = self.clip_encoder.encode(text) #B*77*1024 - t5_z = self.t5_encoder.encode(text) #B*77*Z - return [clip_z, t5_z] - -class FrozenOpenCLIPT5SepEncoder(FrozenOpenCLIPT5Encoder): - def forward(self, text): - if islistortuple(text) and len(text) > 0 and islistortuple(text[0]): - assert len(text) == 2 - print("two separate input prompts") - clip_z = self.clip_encoder.encode(text[0]) #B*77*1024 - t5_z = self.t5_encoder.encode(text[1]) #B*77*Z - else: - clip_z = self.clip_encoder.encode(text) #B*77*1024 - t5_z = self.t5_encoder.encode(text) #B*77*Z - return [clip_z, t5_z] - -class MergeTextEmb(nn.Module): 
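-    """Project CLIP and T5 token embeddings to a common width and merge them.
-
-    A sketch of the intent, as inferred from the code below: with
-    merge_mode="add" the projected sequences must have identical shapes and
-    are summed element-wise; with merge_mode="concat" they are concatenated
-    along the token axis (dim=1).
-    """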
-    def __init__(self, clip_emb_dim, t5_emb_dim, out_emb_dim=None, trainable=True, merge_mode="add", t5_fc_init="zero"):
-        super().__init__()
-        out_emb_dim = default(out_emb_dim, clip_emb_dim)
-        self.clip_fc = identity_init_fc(nn.Linear(clip_emb_dim, out_emb_dim))
-        if t5_fc_init == "zero":
-            self.t5_fc = zero_module(nn.Linear(t5_emb_dim, out_emb_dim))
-        elif t5_fc_init == "identity":
-            self.t5_fc = identity_init_fc(nn.Linear(t5_emb_dim, out_emb_dim))
-        else:
-            raise ValueError("The initialization way {} is not supported.".format(t5_fc_init))
-        self.trainable = trainable
-        self.merge_mode = merge_mode
-
-    def forward(self, clip_emb, t5_emb):
-        clip_out = self.clip_fc(clip_emb)
-        t5_out = self.t5_fc(t5_emb)
-        if self.merge_mode == "concat":
-            merge_out = torch.cat([clip_out, t5_out], dim=1)
-        elif self.merge_mode == "add":
-            assert clip_out.shape == t5_out.shape
-            merge_out = clip_out + t5_out
-        else:
-            raise ValueError("Invalid merging way: {}".format(self.merge_mode))
-        return merge_out
-
-
-class TransTextEmb(nn.Module):
-    def __init__(self, unet_context_dim, emb_dims, fc_inits=None, trans_trainable=None):
-        super().__init__()
-        # assert isinstance(emb_dims, list)
-        emb_num = len(emb_dims)
-        if fc_inits is not None:
-            # assert isinstance(fc_inits, list) and
-            assert len(fc_inits) == emb_num
-        else:
-            fc_inits = ["random" for i in range(emb_num)]
-
-        if trans_trainable is not None:
-            # assert isinstance(trans_trainable, list) and
-            assert len(trans_trainable) == emb_num
-        else:
-            trans_trainable = [True for i in range(emb_num)]
-
-        module_list = nn.ModuleList([])
-        for i in range(emb_num):
-            trans = nn.Linear(emb_dims[i], unet_context_dim)
-            if fc_inits[i] == "zero":
-                trans = zero_module(trans)
-            elif fc_inits[i] == "identity":
-                trans = identity_init_fc(trans)
-            module_list.append(trans)
-
-        self.trans_list = module_list
-        self.trans_trainable = trans_trainable
-        self.emb_num = emb_num
-
-    def forward(self, emb_list):
-        assert len(emb_list) == self.emb_num
-        emb_out_list = []
-        for i in range(self.emb_num):
-            emb_out = self.trans_list[i](emb_list[i])
-            emb_out_list.append(emb_out)
-        return emb_out_list
-
-
-class FrozenOpenCLIPT5ByT5Encoder(AbstractEncoder):
-    def __init__(self,
-                 arch="ViT-H-14", clip_version="laion2b_s32b_b79k", layer="last", clip_max_length=77,
-                 t5_version="google/t5-v1_1-large", t5_max_length=77, padding="max_length",
-                 byt5_version="google/byt5-large", byt5_max_length=77, byt5_padding="max_length",
-                 device="cuda", freeze=True):
-        super().__init__()
-        self.clip_encoder = FrozenOpenCLIPEmbedder(
-            arch=arch, version=clip_version, device=device, max_length=clip_max_length,
-            freeze=freeze, layer=layer
-        )
-        self.t5_encoder = FrozenT5Embedder(
-            t5_version, device, max_length=t5_max_length, freeze=freeze, padding=padding
-        )
-        self.byt5_encoder = FrozenT5Embedder(
-            byt5_version, device, max_length=byt5_max_length, freeze=freeze, padding=byt5_padding
-        )
-        print(f"{self.clip_encoder.__class__.__name__} has {count_params(self.clip_encoder)*1.e-6:.2f} M parameters, "
-              f"{self.t5_encoder.__class__.__name__} comes with {count_params(self.t5_encoder)*1.e-6:.2f} M params."
- f"{self.byt5_encoder.__class__.__name__} comes with {count_params(self.byt5_encoder)*1.e-6:.2f} M params.") - - def encode(self, text): - return self(text) - - def forward(self, text): - clip_z = self.clip_encoder.encode(text) #B*77*1024 - t5_z = self.t5_encoder.encode(text) #B*77*Z - byt5_z = self.byt5_encoder.encode(text) - return [clip_z, t5_z, byt5_z] - - -class FrozenOpenCLIPT5ByT5SepEncoder(FrozenOpenCLIPT5ByT5Encoder): - def forward(self, text): - if islistortuple(text) and len(text) > 0 and islistortuple(text[0]): - assert len(text) <= 3 - clip_text = text[0] - t5_text = text[1] if len(text) > 1 else text[0] - byt5_text = text[-1] - else: - clip_text = text - t5_text = text - byt5_text = text - clip_z = self.clip_encoder.encode(clip_text) #B*77*1024 - t5_z = self.t5_encoder.encode(t5_text) #B*77*Z_1 - byt5_z = self.byt5_encoder.encode(byt5_text) #B*77*Z_2 - del clip_text, t5_text, byt5_text - return [clip_z, t5_z, byt5_z] - - -class OpenCLIPImageEmbedder(AbstractEncoder): - """ - Uses the OpenCLIP transformer encoder for image - """ - def __init__(self, arch="ViT-H-14", version="laion2b_s32b_b79k", device="cuda", - freeze=True, set_grad_checkpointing = True): - super().__init__() - model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version) - self.image_mean = model.visual.image_mean - self.image_std = model.visual.image_std - del model.transformer - del model.token_embedding - del model.positional_embedding - del model.ln_final - del model.text_projection - del model.logit_scale - # only model.visual is left - - self.model = model - self.device = device - - if not freeze and set_grad_checkpointing: - self.model.visual.set_grad_checkpointing(True) - self.freeze_model = freeze - - def forward(self, img): - z = self.model.encode_image(img) # 2.0.2 , normalize=False) 2.7.0 - return z - - def encode(self, img): - return self(img) \ No newline at end of file diff --git a/spaces/AIZero2HeroBootcamp/FastSpeech2LinerGradioApp/app.py b/spaces/AIZero2HeroBootcamp/FastSpeech2LinerGradioApp/app.py deleted file mode 100644 index 624711103fff0eb591bc05f07ae20c47fbe03cd2..0000000000000000000000000000000000000000 --- a/spaces/AIZero2HeroBootcamp/FastSpeech2LinerGradioApp/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/facebook/fastspeech2-en-ljspeech").launch() \ No newline at end of file diff --git a/spaces/Abhay834/SY_Bot/README.md b/spaces/Abhay834/SY_Bot/README.md deleted file mode 100644 index f450007986c811a7cff38669125264146f4f9f49..0000000000000000000000000000000000000000 --- a/spaces/Abhay834/SY_Bot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: My Genai Chatbot -emoji: 🐨 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -duplicated_from: Abhay834/my_genai_chatbot ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/GetGpt.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/GetGpt.py deleted file mode 100644 index a5de1d296a5d6abada13030ceabcd181e2f90497..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/GetGpt.py +++ /dev/null @@ -1,88 +0,0 @@ -from __future__ import annotations - -import json -import os -import uuid - -import requests -from Crypto.Cipher import AES - -from ...typing import Any, CreateResult -from 
..base_provider import BaseProvider
-
-
-class GetGpt(BaseProvider):
-    url                   = 'https://chat.getgpt.world/'
-    supports_stream       = True
-    working               = False
-    supports_gpt_35_turbo = True
-
-    @staticmethod
-    def create_completion(
-        model: str,
-        messages: list[dict[str, str]],
-        stream: bool, **kwargs: Any) -> CreateResult:
-
-        headers = {
-            'Content-Type' : 'application/json',
-            'Referer'      : 'https://chat.getgpt.world/',
-            'user-agent'   : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
-        }
-
-        data = json.dumps(
-            {
-                'messages'          : messages,
-                'frequency_penalty' : kwargs.get('frequency_penalty', 0),
-                'max_tokens'        : kwargs.get('max_tokens', 4000),
-                'model'             : 'gpt-3.5-turbo',
-                'presence_penalty'  : kwargs.get('presence_penalty', 0),
-                'temperature'       : kwargs.get('temperature', 1),
-                'top_p'             : kwargs.get('top_p', 1),
-                'stream'            : True,
-                'uuid'              : str(uuid.uuid4())
-            }
-        )
-
-        res = requests.post('https://chat.getgpt.world/api/chat/stream',
-                            headers=headers, json={'signature': _encrypt(data)}, stream=True)
-
-        res.raise_for_status()
-        for line in res.iter_lines():
-            if b'content' in line:
-                line_json = json.loads(line.decode('utf-8').split('data: ')[1])
-                yield (line_json['choices'][0]['delta']['content'])
-
-    @classmethod
-    @property
-    def params(cls):
-        params = [
-            ('model', 'str'),
-            ('messages', 'list[dict[str, str]]'),
-            ('stream', 'bool'),
-            ('temperature', 'float'),
-            ('presence_penalty', 'int'),
-            ('frequency_penalty', 'int'),
-            ('top_p', 'int'),
-            ('max_tokens', 'int'),
-        ]
-        param = ', '.join([': '.join(p) for p in params])
-        return f'g4f.provider.{cls.__name__} supports: ({param})'
-
-
-def _encrypt(e: str):
-    t = os.urandom(8).hex().encode('utf-8')
-    n = os.urandom(8).hex().encode('utf-8')
-    r = e.encode('utf-8')
-
-    cipher     = AES.new(t, AES.MODE_CBC, n)
-    ciphertext = cipher.encrypt(_pad_data(r))
-
-    return ciphertext.hex() + t.decode('utf-8') + n.decode('utf-8')
-
-
-def _pad_data(data: bytes) -> bytes:
-    block_size   = AES.block_size
-    padding_size = block_size - len(data) % block_size
-    padding      = bytes([padding_size] * padding_size)
-
-    return data + padding
diff --git a/spaces/AdityaVishwakarma/LiveChecker/app.py b/spaces/AdityaVishwakarma/LiveChecker/app.py
deleted file mode 100644
index b1f0d326d74a480254efca194c950e15e710539b..0000000000000000000000000000000000000000
--- a/spaces/AdityaVishwakarma/LiveChecker/app.py
+++ /dev/null
@@ -1,61 +0,0 @@
-# Import the requests and BeautifulSoup libraries
-import requests
-from bs4 import BeautifulSoup
-import streamlit as st
-
-# Define the URL of the website you want to interact with
-url = 'https://www.iana.org/domains/root/db'
-
-st.title("LiveChecker: Live Site Validator")
-st.write("This code finds and shows active websites with a specific term like google or example")
-
-user_input = st.text_input('Enter a website name', placeholder='google').strip() or "google"
-
-if st.button("Start checking"):
-    progress_bar = st.progress(0)
-    # Initialize html first so the check below cannot hit an unbound name if the request fails
-    html = None
-    # Try to send a GET request to the URL and read the HTML content
-    try:
-        response = requests.get(url)
-        response.raise_for_status()
-        html = response.text
-    except requests.exceptions.RequestException as e:
-        st.write('Request Error:', e, url)
-
-    if html:
-        # Create a Python list from domain texts using BeautifulSoup
-        domain_list = [link.get_text().strip() for link in BeautifulSoup(html, 'html.parser').find_all('a', href=lambda x: x and x.startswith('/domains/root/db/'))]
-        domain_list = [domain for domain in domain_list if 
domain.isascii()] - - def check_website(): - # Get the total number of domain names in the list - total = len(domain_list) - # Initialize a counter for completed domain names - count = 0 - # Loop through each domain name and check if the website is live or not - for domain in domain_list: - # Add the domain name to the base URL of yomovies website - user_input_url = 'www.' + user_input + domain - outputtext = '' - - try: - response = requests.get('https://' + user_input_url,stream=True,timeout=2) - status_code = response.status_code - outputtext = 'https://' + user_input_url - except: - # If https fails, try again with http - try: - response = requests.get('http://' + user_input_url,stream=True,timeout=2) - status_code = response.status_code - outputtext = 'http://' + user_input_url - except: - # If both fail, set status code to None - status_code = None - # Print the result based on the status code - if status_code == 200: - st.write(outputtext, 'is live ✅') - # Increment the counter by one - count += 1 - # Calculate the percentage of completion and update the progress bar value - percent = int(count / total * 100) - progress_bar.progress(percent) - check_website() \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/dataloader/logic_grid.py b/spaces/AgentVerse/agentVerse/dataloader/logic_grid.py deleted file mode 100644 index 200344d2570307f87993590f3dd255f33030575f..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/dataloader/logic_grid.py +++ /dev/null @@ -1,22 +0,0 @@ -from .dataloader import DataLoader -from . import dataloader_registry -import json -import re - - -@dataloader_registry.register("tasksolving/logic_grid/gpt-4") -class LogicGridLoader(DataLoader): - def __init__(self, path: str): - self.answer_pat = re.compile(r"#### (-?\d+)") - super().__init__(path) - - def load(self): - with open(self.path) as f: - for line in f: - line = json.loads(line) - self.examples.append( - { - "input": line["inputs"], - "answer": line["targets"][0], - } - ) diff --git a/spaces/AgentVerse/agentVerse/ui/src/constants.ts b/spaces/AgentVerse/agentVerse/ui/src/constants.ts deleted file mode 100644 index eb9cdfd032fa96bda12704a29f83704d7008392d..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/constants.ts +++ /dev/null @@ -1,3 +0,0 @@ -export const COLOR_PRIMARY = 0x4e342e; -export const COLOR_LIGHT = 0x7b5e57; -export const COLOR_DARK = 0x260e04; diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/container/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/container/Factory.js deleted file mode 100644 index 7b7e558f751e4b706d88ce38bc51e360508a10d8..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/container/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import Container from './Container.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('container', function (x, y, width, height, children) { - var gameObject = new Container(this.scene, x, y, width, height, children); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.Container', Container); - -export default Container; \ No newline at end of file diff --git a/spaces/Aki004/herta-so-vits/inference/infer_tool_grad.py b/spaces/Aki004/herta-so-vits/inference/infer_tool_grad.py deleted file mode 100644 index 
b75af49c08e2e724839828bc419792ed580809bb..0000000000000000000000000000000000000000
--- a/spaces/Aki004/herta-so-vits/inference/infer_tool_grad.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import hashlib
-import json
-import logging
-import os
-import time
-from pathlib import Path
-import io
-import librosa
-import maad
-import numpy as np
-from inference import slicer
-import parselmouth
-import soundfile
-import torch
-import torchaudio
-
-from hubert import hubert_model
-import utils
-from models import SynthesizerTrn
-logging.getLogger('numba').setLevel(logging.WARNING)
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-def resize2d_f0(x, target_len):
-    source = np.array(x)
-    source[source < 0.001] = np.nan
-    target = np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)),
-                       source)
-    res = np.nan_to_num(target)
-    return res
-
-def get_f0(x, p_len, f0_up_key=0):
-
-    time_step = 160 / 16000 * 1000
-    f0_min = 50
-    f0_max = 1100
-    f0_mel_min = 1127 * np.log(1 + f0_min / 700)
-    f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-
-    f0 = parselmouth.Sound(x, 16000).to_pitch_ac(
-        time_step=time_step / 1000, voicing_threshold=0.6,
-        pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
-
-    pad_size = (p_len - len(f0) + 1) // 2
-    if (pad_size > 0 or p_len - len(f0) - pad_size > 0):
-        f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode='constant')
-
-    f0 *= pow(2, f0_up_key / 12)
-    f0_mel = 1127 * np.log(1 + f0 / 700)
-    f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1
-    f0_mel[f0_mel <= 1] = 1
-    f0_mel[f0_mel > 255] = 255
-    # use a concrete integer dtype (the bare np.int alias was removed in NumPy 1.24)
-    f0_coarse = np.rint(f0_mel).astype(np.int64)
-    return f0_coarse, f0
-
-def clean_pitch(input_pitch):
-    num_nan = np.sum(input_pitch == 1)
-    if num_nan / len(input_pitch) > 0.9:
-        input_pitch[input_pitch != 1] = 1
-    return input_pitch
-
-
-def plt_pitch(input_pitch):
-    input_pitch = input_pitch.astype(float)
-    input_pitch[input_pitch == 1] = np.nan
-    return input_pitch
-
-
-def f0_to_pitch(ff):
-    f0_pitch = 69 + 12 * np.log2(ff / 440)
-    return f0_pitch
-
-
-def fill_a_to_b(a, b):
-    if len(a) < len(b):
-        for _ in range(0, len(b) - len(a)):
-            a.append(a[0])
-
-
-def mkdir(paths: list):
-    for path in paths:
-        if not os.path.exists(path):
-            os.mkdir(path)
-
-
-class VitsSvc(object):
-    def __init__(self):
-        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-        self.SVCVITS = None
-        self.hps = None
-        self.speakers = None
-        self.hubert_soft = utils.get_hubert_model()
-
-    def set_device(self, device):
-        self.device = torch.device(device)
-        self.hubert_soft.to(self.device)
-        if self.SVCVITS != None:
-            self.SVCVITS.to(self.device)
-
-    def loadCheckpoint(self, path):
-        self.hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json")
-        self.SVCVITS = SynthesizerTrn(
-            self.hps.data.filter_length // 2 + 1,
-            self.hps.train.segment_size // self.hps.data.hop_length,
-            **self.hps.model)
-        _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", self.SVCVITS, None)
-        _ = self.SVCVITS.eval().to(self.device)
-        self.speakers = self.hps.spk
-
-    def get_units(self, source, sr):
-        source = source.unsqueeze(0).to(self.device)
-        with torch.inference_mode():
-            units = self.hubert_soft.units(source)
-            return units
-
-
-    def get_unit_pitch(self, in_path, tran):
-        source, sr = torchaudio.load(in_path)
-        source = torchaudio.functional.resample(source, sr, 16000)
-        if len(source.shape) == 2 and source.shape[1] >= 2:
-            source = torch.mean(source, dim=0).unsqueeze(0)
-        soft = self.get_units(source, sr).squeeze(0).cpu().numpy()
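-        # get_f0 is asked for soft.shape[0] * 2 frames: a note inferred from the
-        # constants above, since the HuBERT features use a 320-sample hop at
-        # 16 kHz (50 Hz) while f0 is extracted with a 160-sample time step
-        # (100 Hz), giving two pitch frames per content frame.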
-        f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran)
-        return soft, f0
-
-    def infer(self, speaker_id, tran, raw_path):
-        speaker_id = self.speakers[speaker_id]
-        sid = torch.LongTensor([int(speaker_id)]).to(self.device).unsqueeze(0)
-        soft, pitch = self.get_unit_pitch(raw_path, tran)
-        f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.device)
-        stn_tst = torch.FloatTensor(soft)
-        with torch.no_grad():
-            x_tst = stn_tst.unsqueeze(0).to(self.device)
-            x_tst = torch.repeat_interleave(x_tst, repeats=2, dim=1).transpose(1, 2)
-            audio = self.SVCVITS.infer(x_tst, f0=f0, g=sid)[0,0].data.float()
-        return audio, audio.shape[-1]
-
-    def inference(self, srcaudio, chara, tran, slice_db):
-        sampling_rate, audio = srcaudio
-        audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
-        if len(audio.shape) > 1:
-            audio = librosa.to_mono(audio.transpose(1, 0))
-        if sampling_rate != 16000:
-            audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
-        soundfile.write("tmpwav.wav", audio, 16000, format="wav")
-        chunks = slicer.cut("tmpwav.wav", db_thresh=slice_db)
-        audio_data, audio_sr = slicer.chunks2audio("tmpwav.wav", chunks)
-        audio = []
-        for (slice_tag, data) in audio_data:
-            length = int(np.ceil(len(data) / audio_sr * self.hps.data.sampling_rate))
-            raw_path = io.BytesIO()
-            soundfile.write(raw_path, data, audio_sr, format="wav")
-            raw_path.seek(0)
-            if slice_tag:
-                _audio = np.zeros(length)
-            else:
-                out_audio, out_sr = self.infer(chara, tran, raw_path)
-                _audio = out_audio.cpu().numpy()
-            audio.extend(list(_audio))
-        audio = (np.array(audio) * 32768.0).astype('int16')
-        return (self.hps.data.sampling_rate, audio)
diff --git a/spaces/Akinade/Iris_App/app.py b/spaces/Akinade/Iris_App/app.py
deleted file mode 100644
index bb585b7a5ac984ba91e07e97d70e6e9c4416a80c..0000000000000000000000000000000000000000
--- a/spaces/Akinade/Iris_App/app.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import gradio as gr
-import joblib
-
-
-
-def values(sepal_length, sepal_height, petal_length, petal_height):
-    model = joblib.load('iris-predictor.joblib')
-    action = model.predict([[sepal_length, sepal_height, petal_length, petal_height]])
-    if action == 'Iris-setosa':
-        image_1 = 'Irissetosa1 copy.jpg'
-        text1 = 'This is Iris Setosa 💐'
-        return text1, image_1
-    elif action == 'Iris-virginica':
-        image_2 = 'Iris_virginica_2 copy.jpg'
-        text2 = 'This is Iris Virginica 🌺'
-        return text2, image_2
-    elif action == 'Iris-versicolor':
-        image_3 = 'iris_versicolor copy.JPG'
-        text3 = 'This is Iris Versicolor 🌼'
-        return text3, image_3
-    else:
-        return "No Picture to display for your ambiguous values", None
-
-
-sepal_l = gr.inputs.Slider(0.1, 9.9, label='Sepal-Length')
-sepal_h = gr.inputs.Slider(0.1, 9.9, label='Sepal-Height')
-petal_l = gr.inputs.Slider(0.1, 9.9, label='Petal-Length')
-petal_h = gr.inputs.Slider(0.1, 9.9, label='Petal-Height')
-
-output = gr.Textbox(label="Result")
-output1 = gr.outputs.Image(label="Image Result")
-
-
-app = gr.Interface(fn=values, inputs=[sepal_l, sepal_h, petal_l, petal_h], outputs=[output, output1], title='An iris flower app',
-                   description='Input the Flower Details for Sepal and Petal Respectively.', examples=[[4.7, 3.2, 1.6, 0.2],
-                                                                                                       [6.0, 2.7, 5.1, 1.6],
-                                                                                                       [6.5, 3.0, 5.5, 1.8]], live=False, theme='huggingface')
-
-
-
-app.launch()
diff --git a/spaces/Alpaca233/SadTalker/src/facerender/sync_batchnorm/replicate.py b/spaces/Alpaca233/SadTalker/src/facerender/sync_batchnorm/replicate.py
deleted file mode 100644
index b71c7b8ed51a1d6c55b1f753bdd8d90bad79bd06..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/facerender/sync_batchnorm/replicate.py
+++ /dev/null
@@ -1,94 +0,0 @@
-# -*- coding: utf-8 -*-
-# File   : replicate.py
-# Author : Jiayuan Mao
-# Email  : maojiayuan@gmail.com
-# Date   : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-import functools
-
-from torch.nn.parallel.data_parallel import DataParallel
-
-__all__ = [
-    'CallbackContext',
-    'execute_replication_callbacks',
-    'DataParallelWithCallback',
-    'patch_replication_callback'
-]
-
-
-class CallbackContext(object):
-    pass
-
-
-def execute_replication_callbacks(modules):
-    """
-    Execute a replication callback `__data_parallel_replicate__` on each module created by the original replication.
-
-    The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`
-
-    Note that, as all modules are isomorphic, we assign each sub-module with a context
-    (shared among multiple copies of this module on different devices).
-    Through this context, different copies can share some information.
-
-    We guarantee that the callback on the master copy (the first copy) will be called ahead of calling the callback
-    of any slave copies.
-    """
-    master_copy = modules[0]
-    nr_modules = len(list(master_copy.modules()))
-    ctxs = [CallbackContext() for _ in range(nr_modules)]
-
-    for i, module in enumerate(modules):
-        for j, m in enumerate(module.modules()):
-            if hasattr(m, '__data_parallel_replicate__'):
-                m.__data_parallel_replicate__(ctxs[j], i)
-
-
-class DataParallelWithCallback(DataParallel):
-    """
-    Data Parallel with a replication callback.
-
-    A replication callback `__data_parallel_replicate__` of each module will be invoked after being created by
-    the original `replicate` function.
-    The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`
-
-    Examples:
-        > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
-        > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
-        # sync_bn.__data_parallel_replicate__ will be invoked.
-    """
-
-    def replicate(self, module, device_ids):
-        modules = super(DataParallelWithCallback, self).replicate(module, device_ids)
-        execute_replication_callbacks(modules)
-        return modules
-
-
-def patch_replication_callback(data_parallel):
-    """
-    Monkey-patch an existing `DataParallel` object. Add the replication callback.
-    Useful when you have customized `DataParallel` implementation.
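-    The patch only swaps out `replicate`; the wrapped method still calls the
-    original implementation and then runs `execute_replication_callbacks` on its result.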
- - Examples: - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallel(sync_bn, device_ids=[0, 1]) - > patch_replication_callback(sync_bn) - # this is equivalent to - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - """ - - assert isinstance(data_parallel, DataParallel) - - old_replicate = data_parallel.replicate - - @functools.wraps(old_replicate) - def new_replicate(module, device_ids): - modules = old_replicate(module, device_ids) - execute_replication_callbacks(modules) - return modules - - data_parallel.replicate = new_replicate diff --git a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/latex/attention/model_architecture.tex b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/latex/attention/model_architecture.tex deleted file mode 100644 index c82be6242cc9d26203360e90d3ac9184ef6ad842..0000000000000000000000000000000000000000 --- a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/latex/attention/model_architecture.tex +++ /dev/null @@ -1,155 +0,0 @@ - -\begin{figure} - \centering - \includegraphics[scale=0.6]{Figures/ModalNet-21} - \caption{The Transformer - model architecture.} - \label{fig:model-arch} -\end{figure} - -% Although the primary workhorse of our model is attention, -%Our model maintains the encoder-decoder structure that is common to many so-called sequence-to-sequence models \citep{bahdanau2014neural,sutskever14}. As in all such architectures, the encoder computes a representation of the input sequence, and the decoder consumes these representations along with the output tokens to autoregressively produce the output sequence. Where, traditionally, the encoder and decoder contain stacks of recurrent or convolutional layers, our encoder and decoder stacks are composed of attention layers and position-wise feed-forward layers (Figure~\ref{fig:model-arch}). The following sections describe the gross architecture and these particular components in detail. - -Most competitive neural sequence transduction models have an encoder-decoder structure \citep{cho2014learning,bahdanau2014neural,sutskever14}. Here, the encoder maps an input sequence of symbol representations $(x_1, ..., x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, ..., z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1,...,y_m)$ of symbols one element at a time. At each step the model is auto-regressive \citep{graves2013generating}, consuming the previously generated symbols as additional input when generating the next. - -The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure~\ref{fig:model-arch}, respectively. - -\subsection{Encoder and Decoder Stacks} - -\paragraph{Encoder:}The encoder is composed of a stack of $N=6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection \citep{he2016deep} around each of the two sub-layers, followed by layer normalization \cite{layernorm2016}. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. 
To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $\dmodel=512$.
-
-\paragraph{Decoder:}The decoder is also composed of a stack of $N=6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$.
-
-% In our model (Figure~\ref{fig:model-arch}), the encoder and decoder are composed of stacks of alternating self-attention layers (for cross-positional communication) and position-wise feed-forward layers (for in-place computation). In addition, the decoder stack contains encoder-decoder attention layers. Since attention is agnostic to the distances between words, our model requires a "positional encoding" to be added to the encoder and decoder input. The following sections describe all of these components in detail.
-
-\subsection{Attention} \label{sec:attention}
-An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
-
-\subsubsection{Scaled Dot-Product Attention} \label{sec:scaled-dot-prod}
-
-% \begin{figure}
-% \centering
% \includegraphics[scale=0.6]{Figures/ModalNet-19}
-% \caption{Scaled Dot-Product Attention.}
-% \label{fig:multi-head-att}
-% \end{figure}
-
-We call our particular attention "Scaled Dot-Product Attention" (Figure~\ref{fig:multi-head-att}). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values.
-
-In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as:
-
-\begin{equation}
-   \mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V
-\end{equation}
-
-The two most commonly used attention functions are additive attention \citep{bahdanau2014neural}, and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.
-
-%We scale the dot products by $1/\sqrt{d_k}$ to limit the magnitude of the dot products, which works well in practice. Otherwise, we found that applying the softmax often resulted in weights very close to 0 or 1, and hence minuscule gradients. 
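-
-% A minimal PyTorch sketch of the attention defined above (an illustration,
-% not part of the paper; q, k of shape [batch, length, d_k] and v of shape
-% [batch, length, d_v] are assumed):
-%   weights = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1]), dim=-1)
-%   output  = weights @ v   # weighted sum of the values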
- -% Already described in the subsequent section -%When used as part of decoder self-attention, an optional mask function is applied just before the softmax to prevent positions from attending to subsequent positions. This mask simply sets the logits corresponding to all illegal connections (those outside of the lower triangle) to $-\infty$. - -%\paragraph{Comparison to Additive Attention: } We choose dot product attention over additive attention \citep{bahdanau2014neural} since it can be computed using highly optimized matrix multiplication code. This optimization is particularly important to us, as we employ many attention layers in our model. - -While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$ \citep{DBLP:journals/corr/BritzGLL17}. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients \footnote{To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$.}. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$. - - -%We suspect this to be caused by the dot products growing too large in magnitude to result in useful gradients after applying the softmax function. To counteract this, we scale the dot product by $1/\sqrt{d_k}$. - - -\subsubsection{Multi-Head Attention} \label{sec:multihead} - -\begin{figure} -\begin{minipage}[t]{0.5\textwidth} - \centering - Scaled Dot-Product Attention \\ - \vspace{0.5cm} - \includegraphics[scale=0.6]{Figures/ModalNet-19} -\end{minipage} -\begin{minipage}[t]{0.5\textwidth} - \centering - Multi-Head Attention \\ - \vspace{0.1cm} - \includegraphics[scale=0.6]{Figures/ModalNet-20} -\end{minipage} - - - % \centering - - \caption{(left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.} - \label{fig:multi-head-att} -\end{figure} - -Instead of performing a single attention function with $\dmodel$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively. -On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure~\ref{fig:multi-head-att}. - -Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this. - -\begin{align*} - \mathrm{MultiHead}(Q, K, V) &= \mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O\\ -% \mathrm{where} \mathrm{head_i} &= \mathrm{Attention}(QW_Q_i^{\dmodel \times d_q}, KW_K_i^{\dmodel \times d_k}, VW^V_i^{\dmodel \times d_v})\\ - \text{where}~\mathrm{head_i} &= \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)\\ -\end{align*} - -Where the projections are parameter matrices $W^Q_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^K_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^V_i \in \mathbb{R}^{\dmodel \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times \dmodel}$. 
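-
-% A shape-level PyTorch sketch of the multi-head computation above (an
-% illustration, not part of the paper; x: [b, n, d_model] with d_model = h*d_k):
-%   q = (x @ W_q).view(b, n, h, d_k).transpose(1, 2)          # likewise for k, v
-%   head_out = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(d_k), dim=-1) @ v
-%   out = head_out.transpose(1, 2).reshape(b, n, h * d_k) @ W_o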
- - -%find it better (and no more expensive) to have multiple parallel attention layers (each over the full set of positions) with proportionally lower-dimensional keys, values and queries. We call this "Multi-Head Attention" (Figure~\ref{fig:multi-head-att}). The keys, values, and queries for each of these parallel attention layers are computed by learned linear transformations of the inputs to the multi-head attention. We use different linear transformations across different parallel attention layers. The output of the parallel attention layers are concatenated, and then passed through a final learned linear transformation. - -In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_k=d_v=\dmodel/h=64$. -Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality. - -\subsubsection{Applications of Attention in our Model} - -The Transformer uses multi-head attention in three different ways: -\begin{itemize} - \item In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as \citep{wu2016google, bahdanau2014neural,JonasFaceNet2017}. - - \item The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder. - - \item Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections. See Figure~\ref{fig:multi-head-att}. - -\end{itemize} - -\subsection{Position-wise Feed-Forward Networks}\label{sec:ffn} - -In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between. - -\begin{equation} - \mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2 -\end{equation} - -While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is $\dmodel=512$, and the inner-layer has dimensionality $d_{ff}=2048$. - - - -%In the appendix, we describe how the position-wise feed-forward network can also be seen as a form of attention. - -%from Jakob: The number of operations required for the model to relate signals from two arbitrary input or output positions grows in the distance between positions in input or output, linearly for ConvS2S and logarithmically for ByteNet, making it harder to learn dependencies between these positions \citep{hochreiter2001gradient}. 
In the transformer this is reduced to a constant number of operations, albeit at the cost of effective resolution caused by averaging attention-weighted positions, an effect we aim to counteract with multi-headed attention. - - -%Figure~\ref{fig:simple-att} presents a simple attention function, $A$, with a single head, that forms the basis of our multi-head attention. $A$ takes a query key vector $\kq$, matrices of memory keys $\km$ and memory values $\vm$ ,and produces a query value vector $\vq$ as -%\begin{equation*} \label{eq:attention} -% A(\kq, \km, \vm) = {\vm}^T (Softmax(\km \kq). -%\end{equation*} -%We linearly transform $\kq,\,\km$, and $\vm$ with learned matrices ${\Wkq \text{,} \, \Wkm}$, and ${\Wvm}$ before calling the attention function, and transform the output query with $\Wvq$ before handing it to the feed forward layer. Each attention layer has it's own set of transformation matrices, which are shared across all query positions. $A$ is applied in parallel for each query position, and is implemented very efficiently as a batch of matrix multiplies. The self-attention and encoder-decoder attention layers use $A$, but with different arguments. For example, in encdoder self-attention, queries in encoder layer $i$ attention to memories in encoder layer $i-1$. To ensure that decoder self-attention layers do not look at future words, we add $- \inf$ to the softmax logits in positions $j+1$ to query length for query position $l$. - -%In simple attention, the query value is a weighted combination of the memory values where the attention weights sum to one. Although this function performs well in practice, the constraint on attention weights can restrict the amount of information that flows from memories to queries because the query cannot focus on multiple memory positions at once, which might be desirable when translating long sequences. \marginpar{@usz, could you think of an example of this ?} We remedy this by maintaining multiple attention heads at each query position that attend to all memory positions in parallel, with a different set of parameters per attention head $h$. -%\marginpar{} - -\subsection{Embeddings and Softmax} -Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $\dmodel$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to \citep{press2016using}. In the embedding layers, we multiply those weights by $\sqrt{\dmodel}$. - - -\subsection{Positional Encoding} -Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $\dmodel$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed \citep{JonasFaceNet2017}. - -In this work, we use sine and cosine functions of different frequencies: - -\begin{align*} - PE_{(pos,2i)} = sin(pos / 10000^{2i/\dmodel}) \\ - PE_{(pos,2i+1)} = cos(pos / 10000^{2i/\dmodel}) -\end{align*} - -where $pos$ is the position and $i$ is the dimension. 
    That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
    -
    -We also experimented with using learned positional embeddings \citep{JonasFaceNet2017} instead, and found that the two versions produced nearly identical results (see Table~\ref{tab:variations} row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
    diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/stylegan2/op/upfirdn2d.cpp b/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/stylegan2/op/upfirdn2d.cpp
    deleted file mode 100644
    index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000
    --- a/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/stylegan2/op/upfirdn2d.cpp
    +++ /dev/null
    @@ -1,23 +0,0 @@
    -#include <torch/extension.h>
    -
    -
    -torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel,
    -                           int up_x, int up_y, int down_x, int down_y,
    -                           int pad_x0, int pad_x1, int pad_y0, int pad_y1);
    -
    -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
    -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
    -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
    -
    -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel,
    -                        int up_x, int up_y, int down_x, int down_y,
    -                        int pad_x0, int pad_x1, int pad_y0, int pad_y1) {
    -    CHECK_CUDA(input);
    -    CHECK_CUDA(kernel);
    -
    -    return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1);
    -}
    -
    -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
    -    m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)");
    -}
    \ No newline at end of file
    diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/models/unet2d-cond.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/models/unet2d-cond.md
    deleted file mode 100644
    index a669b02a7fe82049ddb45b2286710a7d1f8d4bdf..0000000000000000000000000000000000000000
    --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/models/unet2d-cond.md
    +++ /dev/null
    @@ -1,19 +0,0 @@
    -# UNet2DConditionModel
    -
    -The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 2D UNet conditional model.
    -
    -The abstract from the paper is:
    -
    -*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization.
    
We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.* - -## UNet2DConditionModel -[[autodoc]] UNet2DConditionModel - -## UNet2DConditionOutput -[[autodoc]] models.unet_2d_condition.UNet2DConditionOutput - -## FlaxUNet2DConditionModel -[[autodoc]] models.unet_2d_condition_flax.FlaxUNet2DConditionModel - -## FlaxUNet2DConditionOutput -[[autodoc]] models.unet_2d_condition_flax.FlaxUNet2DConditionOutput \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/mixture_tiling.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/mixture_tiling.py deleted file mode 100644 index 3e701cf607f55752543683aa7c7bf8615649aff7..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/mixture_tiling.py +++ /dev/null @@ -1,405 +0,0 @@ -import inspect -from copy import deepcopy -from enum import Enum -from typing import List, Optional, Tuple, Union - -import torch -from tqdm.auto import tqdm - -from diffusers.models import AutoencoderKL, UNet2DConditionModel -from diffusers.pipeline_utils import DiffusionPipeline -from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker -from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler -from diffusers.utils import logging - - -try: - from ligo.segments import segment - from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer -except ImportError: - raise ImportError("Please install transformers and ligo-segments to use the mixture pipeline") - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> from diffusers import LMSDiscreteScheduler, DiffusionPipeline - - >>> scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000) - >>> pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler, custom_pipeline="mixture_tiling") - >>> pipeline.to("cuda") - - >>> image = pipeline( - >>> prompt=[[ - >>> "A charming house in the countryside, by jakub rozalski, sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece", - >>> "A dirt road in the countryside crossing pastures, by jakub rozalski, sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece", - >>> "An old and rusty giant robot lying on a dirt road, by jakub rozalski, dark sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece" - >>> ]], - >>> tile_height=640, - >>> tile_width=640, - >>> tile_row_overlap=0, - >>> tile_col_overlap=256, - >>> guidance_scale=8, - >>> seed=7178915308, - >>> num_inference_steps=50, - >>> )["images"][0] - ``` -""" - - -def _tile2pixel_indices(tile_row, tile_col, tile_width, 
    tile_height, tile_row_overlap, tile_col_overlap):
    -    """Given tile row and column numbers, returns the range of pixels affected by that tile in the overall image
    -
    -    Returns a tuple with:
    -        - Starting coordinates of rows in pixel space
    -        - Ending coordinates of rows in pixel space
    -        - Starting coordinates of columns in pixel space
    -        - Ending coordinates of columns in pixel space
    -    """
    -    px_row_init = 0 if tile_row == 0 else tile_row * (tile_height - tile_row_overlap)
    -    px_row_end = px_row_init + tile_height
    -    px_col_init = 0 if tile_col == 0 else tile_col * (tile_width - tile_col_overlap)
    -    px_col_end = px_col_init + tile_width
    -    return px_row_init, px_row_end, px_col_init, px_col_end
    -
    -
    -def _pixel2latent_indices(px_row_init, px_row_end, px_col_init, px_col_end):
    -    """Translates coordinates in pixel space to coordinates in latent space"""
    -    return px_row_init // 8, px_row_end // 8, px_col_init // 8, px_col_end // 8
    -
    -
    -def _tile2latent_indices(tile_row, tile_col, tile_width, tile_height, tile_row_overlap, tile_col_overlap):
    -    """Given tile row and column numbers, returns the range of latents affected by that tile in the overall image
    -
    -    Returns a tuple with:
    -        - Starting coordinates of rows in latent space
    -        - Ending coordinates of rows in latent space
    -        - Starting coordinates of columns in latent space
    -        - Ending coordinates of columns in latent space
    -    """
    -    px_row_init, px_row_end, px_col_init, px_col_end = _tile2pixel_indices(
    -        tile_row, tile_col, tile_width, tile_height, tile_row_overlap, tile_col_overlap
    -    )
    -    return _pixel2latent_indices(px_row_init, px_row_end, px_col_init, px_col_end)
    -
    -
    -def _tile2latent_exclusive_indices(
    -    tile_row, tile_col, tile_width, tile_height, tile_row_overlap, tile_col_overlap, rows, columns
    -):
    -    """Given tile row and column numbers, returns the range of latents affected only by that tile in the overall image
    -
    -    Returns a tuple with:
    -        - Starting coordinates of rows in latent space
    -        - Ending coordinates of rows in latent space
    -        - Starting coordinates of columns in latent space
    -        - Ending coordinates of columns in latent space
    -    """
    -    row_init, row_end, col_init, col_end = _tile2latent_indices(
    -        tile_row, tile_col, tile_width, tile_height, tile_row_overlap, tile_col_overlap
    -    )
    -    row_segment = segment(row_init, row_end)
    -    col_segment = segment(col_init, col_end)
    -    # Iterate over the rest of the tiles, clipping the region for the current tile
    -    for row in range(rows):
    -        for column in range(columns):
    -            if row != tile_row and column != tile_col:
    -                clip_row_init, clip_row_end, clip_col_init, clip_col_end = _tile2latent_indices(
    -                    row, column, tile_width, tile_height, tile_row_overlap, tile_col_overlap
    -                )
    -                row_segment = row_segment - segment(clip_row_init, clip_row_end)
    -                col_segment = col_segment - segment(clip_col_init, clip_col_end)
    -    return row_segment[0], row_segment[1], col_segment[0], col_segment[1]
    -
    -
    -class StableDiffusionExtrasMixin:
    -    """Mixin providing additional convenience methods to Stable Diffusion pipelines"""
    -
    -    def decode_latents(self, latents, cpu_vae=False):
    -        """Decodes a given array of latents into pixel space"""
    -        # scale and decode the image latents with vae
    -        if cpu_vae:
    -            lat = deepcopy(latents).cpu()
    -            vae = deepcopy(self.vae).cpu()
    -        else:
    -            lat = latents
    -            vae = self.vae
    -
    -        lat = 1 / 0.18215 * lat
    -        image = vae.decode(lat).sample
    -
    -        image = (image / 2 + 0.5).clamp(0, 1)
    -        image = image.cpu().permute(0, 2, 3, 1).numpy()
    -
    -        return self.numpy_to_pil(image)
    -
    -
    -class
    
    StableDiffusionTilingPipeline(DiffusionPipeline, StableDiffusionExtrasMixin):
    -    def __init__(
    -        self,
    -        vae: AutoencoderKL,
    -        text_encoder: CLIPTextModel,
    -        tokenizer: CLIPTokenizer,
    -        unet: UNet2DConditionModel,
    -        scheduler: Union[DDIMScheduler, PNDMScheduler],
    -        safety_checker: StableDiffusionSafetyChecker,
    -        feature_extractor: CLIPFeatureExtractor,
    -    ):
    -        super().__init__()
    -        self.register_modules(
    -            vae=vae,
    -            text_encoder=text_encoder,
    -            tokenizer=tokenizer,
    -            unet=unet,
    -            scheduler=scheduler,
    -            safety_checker=safety_checker,
    -            feature_extractor=feature_extractor,
    -        )
    -
    -    class SeedTilesMode(Enum):
    -        """Modes in which the latents of a particular tile can be re-seeded"""
    -
    -        FULL = "full"
    -        EXCLUSIVE = "exclusive"
    -
    -    @torch.no_grad()
    -    def __call__(
    -        self,
    -        prompt: Union[str, List[List[str]]],
    -        num_inference_steps: Optional[int] = 50,
    -        guidance_scale: Optional[float] = 7.5,
    -        eta: Optional[float] = 0.0,
    -        seed: Optional[int] = None,
    -        tile_height: Optional[int] = 512,
    -        tile_width: Optional[int] = 512,
    -        tile_row_overlap: Optional[int] = 256,
    -        tile_col_overlap: Optional[int] = 256,
    -        guidance_scale_tiles: Optional[List[List[float]]] = None,
    -        seed_tiles: Optional[List[List[int]]] = None,
    -        seed_tiles_mode: Optional[Union[str, List[List[str]]]] = "full",
    -        seed_reroll_regions: Optional[List[Tuple[int, int, int, int, int]]] = None,
    -        cpu_vae: Optional[bool] = False,
    -    ):
    -        r"""
    -        Function to run the diffusion pipeline with tiling support.
    -
    -        Args:
    -            prompt: either a single string (no tiling) or a list of lists with all the prompts to use (one list for each row of tiles). This will also define the tiling structure.
    -            num_inference_steps: number of diffusion steps.
    -            guidance_scale: classifier-free guidance.
    -            seed: general random seed to initialize latents.
    -            tile_height: height in pixels of each grid tile.
    -            tile_width: width in pixels of each grid tile.
    -            tile_row_overlap: number of overlap pixels between tiles in consecutive rows.
    -            tile_col_overlap: number of overlap pixels between tiles in consecutive columns.
    -            guidance_scale_tiles: specific weights for classifier-free guidance in each tile. If None, the value provided in guidance_scale will be used.
    -            seed_tiles: specific seeds for the initialization latents in each tile. These will override the latents generated for the whole canvas using the standard seed parameter.
    -            seed_tiles_mode: either "full" or "exclusive". If "full", all the latents affected by the tile will be overridden. If "exclusive", only the latents that are affected exclusively by this tile (and no other tiles) will be overridden.
    -            seed_reroll_regions: a list of tuples in the form (start row, end row, start column, end column, seed) defining regions in pixel space for which the latents will be overridden using the given seed. Takes priority over seed_tiles.
    -            cpu_vae: the decoder from latent space to pixel space can require too much GPU RAM for large images. If you get out-of-memory errors at the end of the generation process, try setting this parameter to True to run the decoder on the CPU. Slower, but should run without memory issues.
    -
    -        Examples:
    -
    -        Returns:
    -            A PIL image with the generated image.
    
    -
    -        """
    -        if not isinstance(prompt, list) or not all(isinstance(row, list) for row in prompt):
    -            raise ValueError(f"`prompt` has to be a list of lists but is {type(prompt)}")
    -        grid_rows = len(prompt)
    -        grid_cols = len(prompt[0])
    -        if not all(len(row) == grid_cols for row in prompt):
    -            raise ValueError("All prompt rows must have the same number of prompt columns")
    -        if not isinstance(seed_tiles_mode, str) and (
    -            not isinstance(seed_tiles_mode, list) or not all(isinstance(row, list) for row in seed_tiles_mode)
    -        ):
    -            raise ValueError(f"`seed_tiles_mode` has to be a string or list of lists but is {type(seed_tiles_mode)}")
    -        if isinstance(seed_tiles_mode, str):
    -            seed_tiles_mode = [[seed_tiles_mode for _ in range(len(row))] for row in prompt]
    -
    -        modes = [mode.value for mode in self.SeedTilesMode]
    -        if any(mode not in modes for row in seed_tiles_mode for mode in row):
    -            raise ValueError(f"Seed tiles mode must be one of {modes}")
    -        if seed_reroll_regions is None:
    -            seed_reroll_regions = []
    -        batch_size = 1
    -
    -        # create original noisy latents using the timesteps
    -        height = tile_height + (grid_rows - 1) * (tile_height - tile_row_overlap)
    -        width = tile_width + (grid_cols - 1) * (tile_width - tile_col_overlap)
    -        latents_shape = (batch_size, self.unet.config.in_channels, height // 8, width // 8)
    -        generator = torch.Generator("cuda").manual_seed(seed)
    -        latents = torch.randn(latents_shape, generator=generator, device=self.device)
    -
    -        # overwrite latents for specific tiles if provided
    -        if seed_tiles is not None:
    -            for row in range(grid_rows):
    -                for col in range(grid_cols):
    -                    if (seed_tile := seed_tiles[row][col]) is not None:
    -                        mode = seed_tiles_mode[row][col]
    -                        if mode == self.SeedTilesMode.FULL.value:
    -                            row_init, row_end, col_init, col_end = _tile2latent_indices(
    -                                row, col, tile_width, tile_height, tile_row_overlap, tile_col_overlap
    -                            )
    -                        else:
    -                            row_init, row_end, col_init, col_end = _tile2latent_exclusive_indices(
    -                                row,
    -                                col,
    -                                tile_width,
    -                                tile_height,
    -                                tile_row_overlap,
    -                                tile_col_overlap,
    -                                grid_rows,
    -                                grid_cols,
    -                            )
    -                        tile_generator = torch.Generator("cuda").manual_seed(seed_tile)
    -                        tile_shape = (latents_shape[0], latents_shape[1], row_end - row_init, col_end - col_init)
    -                        latents[:, :, row_init:row_end, col_init:col_end] = torch.randn(
    -                            tile_shape, generator=tile_generator, device=self.device
    -                        )
    -
    -        # overwrite again for seed reroll regions
    -        for row_init, row_end, col_init, col_end, seed_reroll in seed_reroll_regions:
    -            row_init, row_end, col_init, col_end = _pixel2latent_indices(
    -                row_init, row_end, col_init, col_end
    -            )  # to latent space coordinates
    -            reroll_generator = torch.Generator("cuda").manual_seed(seed_reroll)
    -            region_shape = (latents_shape[0], latents_shape[1], row_end - row_init, col_end - col_init)
    -            latents[:, :, row_init:row_end, col_init:col_end] = torch.randn(
    -                region_shape, generator=reroll_generator, device=self.device
    -            )
    -
    -        # Prepare scheduler
    -        accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
    -        extra_set_kwargs = {}
    -        if accepts_offset:
    -            extra_set_kwargs["offset"] = 1
    -        self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
    -        # if we use LMSDiscreteScheduler, let's make sure latents are multiplied by sigmas
    -        if isinstance(self.scheduler, LMSDiscreteScheduler):
    -            latents = latents * self.scheduler.sigmas[0]
    -
    -        # get prompts text embeddings
    -        text_input = [
    -            [
    -                self.tokenizer(
    -                    col,
    -                    padding="max_length",
    -                    max_length=self.tokenizer.model_max_length,
    -                    truncation=True,
    -                    return_tensors="pt",
    
) - for col in row - ] - for row in prompt - ] - text_embeddings = [[self.text_encoder(col.input_ids.to(self.device))[0] for col in row] for row in text_input] - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 # TODO: also active if any tile has guidance scale - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - for i in range(grid_rows): - for j in range(grid_cols): - max_length = text_input[i][j].input_ids.shape[-1] - uncond_input = self.tokenizer( - [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt" - ) - uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings[i][j] = torch.cat([uncond_embeddings, text_embeddings[i][j]]) - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # Mask for tile weights strenght - tile_weights = self._gaussian_weights(tile_width, tile_height, batch_size) - - # Diffusion timesteps - for i, t in tqdm(enumerate(self.scheduler.timesteps)): - # Diffuse each tile - noise_preds = [] - for row in range(grid_rows): - noise_preds_row = [] - for col in range(grid_cols): - px_row_init, px_row_end, px_col_init, px_col_end = _tile2latent_indices( - row, col, tile_width, tile_height, tile_row_overlap, tile_col_overlap - ) - tile_latents = latents[:, :, px_row_init:px_row_end, px_col_init:px_col_end] - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([tile_latents] * 2) if do_classifier_free_guidance else tile_latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings[row][col])[ - "sample" - ] - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - guidance = ( - guidance_scale - if guidance_scale_tiles is None or guidance_scale_tiles[row][col] is None - else guidance_scale_tiles[row][col] - ) - noise_pred_tile = noise_pred_uncond + guidance * (noise_pred_text - noise_pred_uncond) - noise_preds_row.append(noise_pred_tile) - noise_preds.append(noise_preds_row) - # Stitch noise predictions for all tiles - noise_pred = torch.zeros(latents.shape, device=self.device) - contributors = torch.zeros(latents.shape, device=self.device) - # Add each tile contribution to overall latents - for row in range(grid_rows): - for col in range(grid_cols): - px_row_init, px_row_end, px_col_init, px_col_end = _tile2latent_indices( - row, col, tile_width, tile_height, tile_row_overlap, tile_col_overlap - ) - noise_pred[:, :, px_row_init:px_row_end, px_col_init:px_col_end] += ( - noise_preds[row][col] * tile_weights - ) 
- contributors[:, :, px_row_init:px_row_end, px_col_init:px_col_end] += tile_weights - # Average overlapping areas with more than 1 contributor - noise_pred /= contributors - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents).prev_sample - - # scale and decode the image latents with vae - image = self.decode_latents(latents, cpu_vae) - - return {"images": image} - - def _gaussian_weights(self, tile_width, tile_height, nbatches): - """Generates a gaussian mask of weights for tile contributions""" - import numpy as np - from numpy import exp, pi, sqrt - - latent_width = tile_width // 8 - latent_height = tile_height // 8 - - var = 0.01 - midpoint = (latent_width - 1) / 2 # -1 because index goes from 0 to latent_width - 1 - x_probs = [ - exp(-(x - midpoint) * (x - midpoint) / (latent_width * latent_width) / (2 * var)) / sqrt(2 * pi * var) - for x in range(latent_width) - ] - midpoint = latent_height / 2 - y_probs = [ - exp(-(y - midpoint) * (y - midpoint) / (latent_height * latent_height) / (2 * var)) / sqrt(2 * pi * var) - for y in range(latent_height) - ] - - weights = np.outer(y_probs, x_probs) - return torch.tile(torch.tensor(weights, device=self.device), (nbatches, self.unet.config.in_channels, 1, 1)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/dcn/mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/dcn/mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py deleted file mode 100644 index cb340022ea27f563b8c4a570cf89b5f09e6434cd..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/dcn/mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py' -model = dict( - backbone=dict( - dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/mask_rcnn_r50_fpn_gn_ws-all_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/mask_rcnn_r50_fpn_gn_ws-all_2x_coco.py deleted file mode 100644 index b83e7b5c7dd63658d57397cde60d8ee4c74d8376..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/mask_rcnn_r50_fpn_gn_ws-all_2x_coco.py +++ /dev/null @@ -1,17 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py' -conv_cfg = dict(type='ConvWS') -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - pretrained='open-mmlab://jhu/resnet50_gn_ws', - backbone=dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg), - neck=dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg), - roi_head=dict( - bbox_head=dict( - type='Shared4Conv1FCBBoxHead', - conv_out_channels=256, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg), - mask_head=dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg))) -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/match_costs/builder.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/match_costs/builder.py deleted file mode 100644 index 6894017d42eb16ee4a8ae3ed660a71cda3ad9940..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/match_costs/builder.py +++ /dev/null @@ -1,8 +0,0 @@ -from mmcv.utils import Registry, build_from_cfg - -MATCH_COST = Registry('Match Cost') - - -def build_match_cost(cfg, default_args=None): - 
"""Builder of IoU calculator.""" - return build_from_cfg(cfg, MATCH_COST, default_args) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/visualization/image.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/visualization/image.py deleted file mode 100644 index 5a148384d7a77c4d9849c54570e85740eaff8235..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/visualization/image.py +++ /dev/null @@ -1,303 +0,0 @@ -import matplotlib.pyplot as plt -import mmcv -import numpy as np -import pycocotools.mask as mask_util -from matplotlib.collections import PatchCollection -from matplotlib.patches import Polygon - -from ..utils import mask2ndarray - -EPS = 1e-2 - - -def color_val_matplotlib(color): - """Convert various input in BGR order to normalized RGB matplotlib color - tuples, - - Args: - color (:obj:`Color`/str/tuple/int/ndarray): Color inputs - - Returns: - tuple[float]: A tuple of 3 normalized floats indicating RGB channels. - """ - color = mmcv.color_val(color) - color = [color / 255 for color in color[::-1]] - return tuple(color) - - -def imshow_det_bboxes(img, - bboxes, - labels, - segms=None, - class_names=None, - score_thr=0, - bbox_color='green', - text_color='green', - mask_color=None, - thickness=2, - font_size=13, - win_name='', - show=True, - wait_time=0, - out_file=None): - """Draw bboxes and class labels (with scores) on an image. - - Args: - img (str or ndarray): The image to be displayed. - bboxes (ndarray): Bounding boxes (with scores), shaped (n, 4) or - (n, 5). - labels (ndarray): Labels of bboxes. - segms (ndarray or None): Masks, shaped (n,h,w) or None - class_names (list[str]): Names of each classes. - score_thr (float): Minimum score of bboxes to be shown. Default: 0 - bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines. - The tuple of color should be in BGR order. Default: 'green' - text_color (str or tuple(int) or :obj:`Color`):Color of texts. - The tuple of color should be in BGR order. Default: 'green' - mask_color (str or tuple(int) or :obj:`Color`, optional): - Color of masks. The tuple of color should be in BGR order. - Default: None - thickness (int): Thickness of lines. Default: 2 - font_size (int): Font size of texts. Default: 13 - show (bool): Whether to show the image. Default: True - win_name (str): The window name. Default: '' - wait_time (float): Value of waitKey param. Default: 0. - out_file (str, optional): The filename to write the image. - Default: None - - Returns: - ndarray: The image with bboxes drawn on it. - """ - assert bboxes.ndim == 2, \ - f' bboxes ndim should be 2, but its ndim is {bboxes.ndim}.' - assert labels.ndim == 1, \ - f' labels ndim should be 1, but its ndim is {labels.ndim}.' - assert bboxes.shape[0] == labels.shape[0], \ - 'bboxes.shape[0] and labels.shape[0] should have the same length.' - assert bboxes.shape[1] == 4 or bboxes.shape[1] == 5, \ - f' bboxes.shape[1] should be 4 or 5, but its {bboxes.shape[1]}.' - img = mmcv.imread(img).astype(np.uint8) - - if score_thr > 0: - assert bboxes.shape[1] == 5 - scores = bboxes[:, -1] - inds = scores > score_thr - bboxes = bboxes[inds, :] - labels = labels[inds] - if segms is not None: - segms = segms[inds, ...] 
- - mask_colors = [] - if labels.shape[0] > 0: - if mask_color is None: - # random color - np.random.seed(42) - mask_colors = [ - np.random.randint(0, 256, (1, 3), dtype=np.uint8) - for _ in range(max(labels) + 1) - ] - else: - # specify color - mask_colors = [ - np.array(mmcv.color_val(mask_color)[::-1], dtype=np.uint8) - ] * ( - max(labels) + 1) - - bbox_color = color_val_matplotlib(bbox_color) - text_color = color_val_matplotlib(text_color) - - img = mmcv.bgr2rgb(img) - width, height = img.shape[1], img.shape[0] - img = np.ascontiguousarray(img) - - fig = plt.figure(win_name, frameon=False) - plt.title(win_name) - canvas = fig.canvas - dpi = fig.get_dpi() - # add a small EPS to avoid precision lost due to matplotlib's truncation - # (https://github.com/matplotlib/matplotlib/issues/15363) - fig.set_size_inches((width + EPS) / dpi, (height + EPS) / dpi) - - # remove white edges by set subplot margin - plt.subplots_adjust(left=0, right=1, bottom=0, top=1) - ax = plt.gca() - ax.axis('off') - - polygons = [] - color = [] - for i, (bbox, label) in enumerate(zip(bboxes, labels)): - bbox_int = bbox.astype(np.int32) - poly = [[bbox_int[0], bbox_int[1]], [bbox_int[0], bbox_int[3]], - [bbox_int[2], bbox_int[3]], [bbox_int[2], bbox_int[1]]] - np_poly = np.array(poly).reshape((4, 2)) - polygons.append(Polygon(np_poly)) - color.append(bbox_color) - label_text = class_names[ - label] if class_names is not None else f'class {label}' - if len(bbox) > 4: - label_text += f'|{bbox[-1]:.02f}' - ax.text( - bbox_int[0], - bbox_int[1], - f'{label_text}', - bbox={ - 'facecolor': 'black', - 'alpha': 0.8, - 'pad': 0.7, - 'edgecolor': 'none' - }, - color=text_color, - fontsize=font_size, - verticalalignment='top', - horizontalalignment='left') - if segms is not None: - color_mask = mask_colors[labels[i]] - mask = segms[i].astype(bool) - img[mask] = img[mask] * 0.5 + color_mask * 0.5 - - plt.imshow(img) - - p = PatchCollection( - polygons, facecolor='none', edgecolors=color, linewidths=thickness) - ax.add_collection(p) - - stream, _ = canvas.print_to_buffer() - buffer = np.frombuffer(stream, dtype='uint8') - img_rgba = buffer.reshape(height, width, 4) - rgb, alpha = np.split(img_rgba, [3], axis=2) - img = rgb.astype('uint8') - img = mmcv.rgb2bgr(img) - - if show: - # We do not use cv2 for display because in some cases, opencv will - # conflict with Qt, it will output a warning: Current thread - # is not the object's thread. You can refer to - # https://github.com/opencv/opencv-python/issues/46 for details - if wait_time == 0: - plt.show() - else: - plt.show(block=False) - plt.pause(wait_time) - if out_file is not None: - mmcv.imwrite(img, out_file) - - plt.close() - - return img - - -def imshow_gt_det_bboxes(img, - annotation, - result, - class_names=None, - score_thr=0, - gt_bbox_color=(255, 102, 61), - gt_text_color=(255, 102, 61), - gt_mask_color=(255, 102, 61), - det_bbox_color=(72, 101, 241), - det_text_color=(72, 101, 241), - det_mask_color=(72, 101, 241), - thickness=2, - font_size=13, - win_name='', - show=True, - wait_time=0, - out_file=None): - """General visualization GT and result function. - - Args: - img (str or ndarray): The image to be displayed.) - annotation (dict): Ground truth annotations where contain keys of - 'gt_bboxes' and 'gt_labels' or 'gt_masks' - result (tuple[list] or list): The detection result, can be either - (bbox, segm) or just bbox. - class_names (list[str]): Names of each classes. - score_thr (float): Minimum score of bboxes to be shown. 
Default: 0 - gt_bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines. - The tuple of color should be in BGR order. Default: (255, 102, 61) - gt_text_color (str or tuple(int) or :obj:`Color`):Color of texts. - The tuple of color should be in BGR order. Default: (255, 102, 61) - gt_mask_color (str or tuple(int) or :obj:`Color`, optional): - Color of masks. The tuple of color should be in BGR order. - Default: (255, 102, 61) - det_bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines. - The tuple of color should be in BGR order. Default: (72, 101, 241) - det_text_color (str or tuple(int) or :obj:`Color`):Color of texts. - The tuple of color should be in BGR order. Default: (72, 101, 241) - det_mask_color (str or tuple(int) or :obj:`Color`, optional): - Color of masks. The tuple of color should be in BGR order. - Default: (72, 101, 241) - thickness (int): Thickness of lines. Default: 2 - font_size (int): Font size of texts. Default: 13 - win_name (str): The window name. Default: '' - show (bool): Whether to show the image. Default: True - wait_time (float): Value of waitKey param. Default: 0. - out_file (str, optional): The filename to write the image. - Default: None - - Returns: - ndarray: The image with bboxes or masks drawn on it. - """ - assert 'gt_bboxes' in annotation - assert 'gt_labels' in annotation - assert isinstance( - result, - (tuple, list)), f'Expected tuple or list, but get {type(result)}' - - gt_masks = annotation.get('gt_masks', None) - if gt_masks is not None: - gt_masks = mask2ndarray(gt_masks) - - img = mmcv.imread(img) - - img = imshow_det_bboxes( - img, - annotation['gt_bboxes'], - annotation['gt_labels'], - gt_masks, - class_names=class_names, - bbox_color=gt_bbox_color, - text_color=gt_text_color, - mask_color=gt_mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=False) - - if isinstance(result, tuple): - bbox_result, segm_result = result - if isinstance(segm_result, tuple): - segm_result = segm_result[0] # ms rcnn - else: - bbox_result, segm_result = result, None - - bboxes = np.vstack(bbox_result) - labels = [ - np.full(bbox.shape[0], i, dtype=np.int32) - for i, bbox in enumerate(bbox_result) - ] - labels = np.concatenate(labels) - - segms = None - if segm_result is not None and len(labels) > 0: # non empty - segms = mmcv.concat_list(segm_result) - segms = mask_util.decode(segms) - segms = segms.transpose(2, 0, 1) - - img = imshow_det_bboxes( - img, - bboxes, - labels, - segms=segms, - class_names=class_names, - score_thr=score_thr, - bbox_color=det_bbox_color, - text_color=det_text_color, - mask_color=det_mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=show, - wait_time=wait_time, - out_file=out_file) - return img diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r50-d8_512x512_20k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r50-d8_512x512_20k_voc12aug.py deleted file mode 100644 index 071f190261c4e8f4a80a5da12a88e0cfcdfef0d8..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r50-d8_512x512_20k_voc12aug.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - '../_base_/models/ann_r50-d8.py', '../_base_/datasets/pascal_voc12_aug.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_20k.py' -] -model = dict( - decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21)) diff --git 
a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_40k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_40k_voc12aug.py deleted file mode 100644 index 492bd3dfdce331070cb9645dbe55142e9b662da1..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_40k_voc12aug.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = [ - '../_base_/models/deeplabv3_r50-d8.py', - '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/lraspp_m-v3s-d8_scratch_512x1024_320k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/lraspp_m-v3s-d8_scratch_512x1024_320k_cityscapes.py deleted file mode 100644 index 0c5f707200c5d8b6d39493762baf59023dcaad11..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/lraspp_m-v3s-d8_scratch_512x1024_320k_cityscapes.py +++ /dev/null @@ -1,22 +0,0 @@ -_base_ = './lraspp_m-v3-d8_scratch_512x1024_320k_cityscapes.py' -norm_cfg = dict(type='SyncBN', eps=0.001, requires_grad=True) -model = dict( - type='EncoderDecoder', - backbone=dict( - type='MobileNetV3', - arch='small', - out_indices=(0, 1, 12), - norm_cfg=norm_cfg), - decode_head=dict( - type='LRASPPHead', - in_channels=(16, 16, 576), - in_index=(0, 1, 2), - channels=128, - input_transform='multiple_select', - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))) diff --git a/spaces/Artrajz/vits-simple-api/Dockerfile b/spaces/Artrajz/vits-simple-api/Dockerfile deleted file mode 100644 index f1b1f95b644347246f0925c3b882abfeeb2e31ae..0000000000000000000000000000000000000000 --- a/spaces/Artrajz/vits-simple-api/Dockerfile +++ /dev/null @@ -1,38 +0,0 @@ -FROM artrajz/pytorch:1.13.1-cpu-py3.10.11-ubuntu22.04 - -RUN mkdir -p /app -WORKDIR /app - -ENV DEBIAN_FRONTEND=noninteractive - - -RUN apt-get update && \ - apt-get install -yq build-essential espeak-ng cmake wget ca-certificates tzdata&& \ - update-ca-certificates && \ - apt-get clean && \ - apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false && \ - rm -rf /var/lib/apt/lists/* - -# Install jemalloc -RUN wget https://github.com/jemalloc/jemalloc/releases/download/5.3.0/jemalloc-5.3.0.tar.bz2 && \ - tar -xvf jemalloc-5.3.0.tar.bz2 && \ - cd jemalloc-5.3.0 && \ - ./configure && \ - make -j$(nproc) && \ - make install && \ - cd .. && \ - rm -rf jemalloc-5.3.0* && \ - ldconfig - -ENV LD_PRELOAD=/usr/local/lib/libjemalloc.so - -COPY requirements.txt /app/ -RUN pip install gunicorn --no-cache-dir && \ - pip install -r requirements.txt --no-cache-dir&& \ - rm -rf /root/.cache/pip/* - -COPY . 
/app - -EXPOSE 23456 - -CMD ["gunicorn", "-c", "gunicorn_config.py", "app:app"] \ No newline at end of file diff --git a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/ms_deform_attn.py b/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/ms_deform_attn.py deleted file mode 100644 index 489d501bef364020212306d81e9b85c8daa27491..0000000000000000000000000000000000000000 --- a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/ms_deform_attn.py +++ /dev/null @@ -1,413 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from: -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/functions/ms_deform_attn_func.py -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/modules/ms_deform_attn.py -# https://github.com/open-mmlab/mmcv/blob/master/mmcv/ops/multi_scale_deform_attn.py -# ------------------------------------------------------------------------------------------------ - -import math -import warnings -from typing import Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.init import constant_, xavier_uniform_ - -try: - from groundingdino import _C -except: - warnings.warn("Failed to load custom C++ ops. 
Running on CPU mode Only!") - - -# helpers -def _is_power_of_2(n): - if (not isinstance(n, int)) or (n < 0): - raise ValueError("invalid input for _is_power_of_2: {} (type: {})".format(n, type(n))) - return (n & (n - 1) == 0) and n != 0 - - -class MultiScaleDeformableAttnFunction(Function): - @staticmethod - def forward( - ctx, - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - im2col_step, - ): - ctx.im2col_step = im2col_step - output = _C.ms_deform_attn_forward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ctx.im2col_step, - ) - ctx.save_for_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - ( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ) = ctx.saved_tensors - grad_value, grad_sampling_loc, grad_attn_weight = _C.ms_deform_attn_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - grad_output, - ctx.im2col_step, - ) - - return grad_value, None, None, grad_sampling_loc, grad_attn_weight, None - - -def multi_scale_deformable_attn_pytorch( - value: torch.Tensor, - value_spatial_shapes: torch.Tensor, - sampling_locations: torch.Tensor, - attention_weights: torch.Tensor, -) -> torch.Tensor: - - bs, _, num_heads, embed_dims = value.shape - _, num_queries, num_heads, num_levels, num_points, _ = sampling_locations.shape - value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1) - sampling_grids = 2 * sampling_locations - 1 - sampling_value_list = [] - for level, (H_, W_) in enumerate(value_spatial_shapes): - # bs, H_*W_, num_heads, embed_dims -> - # bs, H_*W_, num_heads*embed_dims -> - # bs, num_heads*embed_dims, H_*W_ -> - # bs*num_heads, embed_dims, H_, W_ - value_l_ = ( - value_list[level].flatten(2).transpose(1, 2).reshape(bs * num_heads, embed_dims, H_, W_) - ) - # bs, num_queries, num_heads, num_points, 2 -> - # bs, num_heads, num_queries, num_points, 2 -> - # bs*num_heads, num_queries, num_points, 2 - sampling_grid_l_ = sampling_grids[:, :, :, level].transpose(1, 2).flatten(0, 1) - # bs*num_heads, embed_dims, num_queries, num_points - sampling_value_l_ = F.grid_sample( - value_l_, sampling_grid_l_, mode="bilinear", padding_mode="zeros", align_corners=False - ) - sampling_value_list.append(sampling_value_l_) - # (bs, num_queries, num_heads, num_levels, num_points) -> - # (bs, num_heads, num_queries, num_levels, num_points) -> - # (bs, num_heads, 1, num_queries, num_levels*num_points) - attention_weights = attention_weights.transpose(1, 2).reshape( - bs * num_heads, 1, num_queries, num_levels * num_points - ) - output = ( - (torch.stack(sampling_value_list, dim=-2).flatten(-2) * attention_weights) - .sum(-1) - .view(bs, num_heads * embed_dims, num_queries) - ) - return output.transpose(1, 2).contiguous() - - -class MultiScaleDeformableAttention(nn.Module): - """Multi-Scale Deformable Attention Module used in Deformable-DETR - - `Deformable DETR: Deformable Transformers for End-to-End Object Detection. - `_. - - Args: - embed_dim (int): The embedding dimension of Attention. Default: 256. - num_heads (int): The number of attention heads. Default: 8. - num_levels (int): The number of feature map used in Attention. Default: 4. 
- num_points (int): The number of sampling points for each query - in each head. Default: 4. - img2col_steps (int): The step used in image_to_column. Defualt: 64. - dropout (float): Dropout layer used in output. Default: 0.1. - batch_first (bool): if ``True``, then the input and output tensor will be - provided as `(bs, n, embed_dim)`. Default: False. `(n, bs, embed_dim)` - """ - - def __init__( - self, - embed_dim: int = 256, - num_heads: int = 8, - num_levels: int = 4, - num_points: int = 4, - img2col_step: int = 64, - batch_first: bool = False, - ): - super().__init__() - if embed_dim % num_heads != 0: - raise ValueError( - "embed_dim must be divisible by num_heads, but got {} and {}".format( - embed_dim, num_heads - ) - ) - head_dim = embed_dim // num_heads - - self.batch_first = batch_first - - if not _is_power_of_2(head_dim): - warnings.warn( - """ - You'd better set d_model in MSDeformAttn to make sure that - each dim of the attention head a power of 2, which is more efficient. - """ - ) - - self.im2col_step = img2col_step - self.embed_dim = embed_dim - self.num_heads = num_heads - self.num_levels = num_levels - self.num_points = num_points - self.sampling_offsets = nn.Linear(embed_dim, num_heads * num_levels * num_points * 2) - self.attention_weights = nn.Linear(embed_dim, num_heads * num_levels * num_points) - self.value_proj = nn.Linear(embed_dim, embed_dim) - self.output_proj = nn.Linear(embed_dim, embed_dim) - - self.init_weights() - - def _reset_parameters(self): - return self.init_weights() - - def init_weights(self): - """ - Default initialization for Parameters of Module. - """ - constant_(self.sampling_offsets.weight.data, 0.0) - thetas = torch.arange(self.num_heads, dtype=torch.float32) * ( - 2.0 * math.pi / self.num_heads - ) - grid_init = torch.stack([thetas.cos(), thetas.sin()], -1) - grid_init = ( - (grid_init / grid_init.abs().max(-1, keepdim=True)[0]) - .view(self.num_heads, 1, 1, 2) - .repeat(1, self.num_levels, self.num_points, 1) - ) - for i in range(self.num_points): - grid_init[:, :, i, :] *= i + 1 - with torch.no_grad(): - self.sampling_offsets.bias = nn.Parameter(grid_init.view(-1)) - constant_(self.attention_weights.weight.data, 0.0) - constant_(self.attention_weights.bias.data, 0.0) - xavier_uniform_(self.value_proj.weight.data) - constant_(self.value_proj.bias.data, 0.0) - xavier_uniform_(self.output_proj.weight.data) - constant_(self.output_proj.bias.data, 0.0) - - def freeze_sampling_offsets(self): - print("Freeze sampling offsets") - self.sampling_offsets.weight.requires_grad = False - self.sampling_offsets.bias.requires_grad = False - - def freeze_attention_weights(self): - print("Freeze attention weights") - self.attention_weights.weight.requires_grad = False - self.attention_weights.bias.requires_grad = False - - def forward( - self, - query: torch.Tensor, - key: Optional[torch.Tensor] = None, - value: Optional[torch.Tensor] = None, - query_pos: Optional[torch.Tensor] = None, - key_padding_mask: Optional[torch.Tensor] = None, - reference_points: Optional[torch.Tensor] = None, - spatial_shapes: Optional[torch.Tensor] = None, - level_start_index: Optional[torch.Tensor] = None, - **kwargs - ) -> torch.Tensor: - - """Forward Function of MultiScaleDeformableAttention - - Args: - query (torch.Tensor): Query embeddings with shape - `(num_query, bs, embed_dim)` - key (torch.Tensor): Key embeddings with shape - `(num_key, bs, embed_dim)` - value (torch.Tensor): Value embeddings with shape - `(num_key, bs, embed_dim)` - query_pos (torch.Tensor): The position 
embedding for `query`. Default: None. - key_padding_mask (torch.Tensor): ByteTensor for `query`, with shape `(bs, num_key)`, - indicating which elements within `key` to be ignored in attention. - reference_points (torch.Tensor): The normalized reference points - with shape `(bs, num_query, num_levels, 2)`, - all elements is range in [0, 1], top-left (0, 0), - bottom-right (1, 1), including padding are. - or `(N, Length_{query}, num_levels, 4)`, add additional - two dimensions `(h, w)` to form reference boxes. - spatial_shapes (torch.Tensor): Spatial shape of features in different levels. - With shape `(num_levels, 2)`, last dimension represents `(h, w)`. - level_start_index (torch.Tensor): The start index of each level. A tensor with - shape `(num_levels, )` which can be represented as - `[0, h_0 * w_0, h_0 * w_0 + h_1 * w_1, ...]`. - - Returns: - torch.Tensor: forward results with shape `(num_query, bs, embed_dim)` - """ - - if value is None: - value = query - - if query_pos is not None: - query = query + query_pos - - if not self.batch_first: - # change to (bs, num_query ,embed_dims) - query = query.permute(1, 0, 2) - value = value.permute(1, 0, 2) - - bs, num_query, _ = query.shape - bs, num_value, _ = value.shape - - assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value - - value = self.value_proj(value) - if key_padding_mask is not None: - value = value.masked_fill(key_padding_mask[..., None], float(0)) - value = value.view(bs, num_value, self.num_heads, -1) - sampling_offsets = self.sampling_offsets(query).view( - bs, num_query, self.num_heads, self.num_levels, self.num_points, 2 - ) - attention_weights = self.attention_weights(query).view( - bs, num_query, self.num_heads, self.num_levels * self.num_points - ) - attention_weights = attention_weights.softmax(-1) - attention_weights = attention_weights.view( - bs, - num_query, - self.num_heads, - self.num_levels, - self.num_points, - ) - - # bs, num_query, num_heads, num_levels, num_points, 2 - if reference_points.shape[-1] == 2: - offset_normalizer = torch.stack([spatial_shapes[..., 1], spatial_shapes[..., 0]], -1) - sampling_locations = ( - reference_points[:, :, None, :, None, :] - + sampling_offsets / offset_normalizer[None, None, None, :, None, :] - ) - elif reference_points.shape[-1] == 4: - sampling_locations = ( - reference_points[:, :, None, :, None, :2] - + sampling_offsets - / self.num_points - * reference_points[:, :, None, :, None, 2:] - * 0.5 - ) - else: - raise ValueError( - "Last dim of reference_points must be 2 or 4, but get {} instead.".format( - reference_points.shape[-1] - ) - ) - - if torch.cuda.is_available() and value.is_cuda: - halffloat = False - if value.dtype == torch.float16: - halffloat = True - value = value.float() - sampling_locations = sampling_locations.float() - attention_weights = attention_weights.float() - - output = MultiScaleDeformableAttnFunction.apply( - value, - spatial_shapes, - level_start_index, - sampling_locations, - attention_weights, - self.im2col_step, - ) - - if halffloat: - output = output.half() - else: - output = multi_scale_deformable_attn_pytorch( - value, spatial_shapes, sampling_locations, attention_weights - ) - - output = self.output_proj(output) - - if not self.batch_first: - output = output.permute(1, 0, 2) - - return output - - -def create_dummy_class(klass, dependency, message=""): - """ - When a dependency of a class is not available, create a dummy class which throws ImportError - when used. - - Args: - klass (str): name of the class. 
- dependency (str): name of the dependency. - message: extra message to print - Returns: - class: a class object - """ - err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, klass) - if message: - err = err + " " + message - - class _DummyMetaClass(type): - # throw error on class attribute access - def __getattr__(_, __): # noqa: B902 - raise ImportError(err) - - class _Dummy(object, metaclass=_DummyMetaClass): - # throw error on constructor - def __init__(self, *args, **kwargs): - raise ImportError(err) - - return _Dummy - - -def create_dummy_func(func, dependency, message=""): - """ - When a dependency of a function is not available, create a dummy function which throws - ImportError when used. - - Args: - func (str): name of the function. - dependency (str or list[str]): name(s) of the dependency. - message: extra message to print - Returns: - function: a function object - """ - err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, func) - if message: - err = err + " " + message - - if isinstance(dependency, (list, tuple)): - dependency = ",".join(dependency) - - def _dummy(*args, **kwargs): - raise ImportError(err) - - return _dummy diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/panoptic_fpn.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/panoptic_fpn.py deleted file mode 100644 index 88f55d2ce9db62e61445d6a3700067d9d864ecae..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/panoptic_fpn.py +++ /dev/null @@ -1,20 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling import PanopticFPN -from detectron2.modeling.meta_arch.semantic_seg import SemSegFPNHead - -from .mask_rcnn_fpn import model - -model._target_ = PanopticFPN -model.sem_seg_head = L(SemSegFPNHead)( - input_shape={ - f: L(ShapeSpec)(stride=s, channels="${....backbone.out_channels}") - for f, s in zip(["p2", "p3", "p4", "p5"], [4, 8, 16, 32]) - }, - ignore_value=255, - num_classes=54, # COCO stuff + 1 - conv_dims=128, - common_stride=4, - loss_weight=0.5, - norm="GN", -) diff --git a/spaces/Beasto/Image_Colorizer_Pix2Pix/README.md b/spaces/Beasto/Image_Colorizer_Pix2Pix/README.md deleted file mode 100644 index c9a39e385e1c9d5ba895067fb8791a4ab8415af4..0000000000000000000000000000000000000000 --- a/spaces/Beasto/Image_Colorizer_Pix2Pix/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image Colorizer Pix2Pix -emoji: 💻 -colorFrom: green -colorTo: red -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Benson/text-generation/Examples/Descargar Gta 5 100 Mb.md b/spaces/Benson/text-generation/Examples/Descargar Gta 5 100 Mb.md deleted file mode 100644 index ec5c3f3f53ff9001e895ffa32e9045ce615ef3f1..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Gta 5 100 Mb.md +++ /dev/null @@ -1,94 +0,0 @@ -
-

    How to Download GTA 5 in 100 MB
    

-

    Grand Theft Auto V, or GTA 5 for short, is one of the most popular and successful video games of all time. Released in 2013 by Rockstar Games, GTA 5 is an open-world action-adventure game that lets you explore the fictional city of Los Santos and its surroundings. You can play as one of three protagonists, each with their own personality and story, or join other players online in various modes and activities. GTA 5 received critical acclaim for its gameplay, graphics, story, and online features, and it has sold more than 150 million copies worldwide.
    

-

    However, GTA 5 is also a very large game that takes up a lot of space on your device. Depending on your platform, the download size of GTA 5 can range from 72 GB to more than 100 GB. This can be a problem if you have a slow internet connection or limited storage space. Fortunately, there is a way to download GTA 5 in a much smaller size by using compressed files. In this article, we will explain what compressed files are, how they can help you download GTA 5 faster and more easily, and how to find and download them safely. We will also give you a brief overview of what GTA 5 offers in terms of gameplay, graphics, and features.
    

-

descargar gta 5 100 mb


Download Zip 🆓 https://bltlly.com/2v6IKj



-

    GTA 5 System Requirements
    

-

    Before downloading GTA 5, you should make sure your device can run it smoothly. GTA 5 is available on several platforms, including PC, PS4, PS5, Xbox One, and Xbox Series X/S. Each platform has its own minimum and recommended requirements that you must meet or exceed. Here are the system requirements for each platform:
    

    | Platform | Minimum requirements | Recommended requirements |
    | --- | --- | --- |
    | PC | OS: Windows 10 (64-bit); Processor: Intel Core i7-3770 @3.4GHz or AMD FX-8350 @4GHz; Memory: 16 GB RAM; Graphics: NVIDIA GeForce GTX 1060 or AMD Radeon RX580 (4 GB VRAM) | n/a |
    | PS4 | PS4 console; Storage: at least 100 GB | Storage: at least 100 GB |
    | PS5 | PS5 console; Storage: at least 72 GB | n/a |
    | Xbox One | Xbox One console; Storage: at least 100 GB | n/a |
    | Xbox Series X/S | Xbox Series X/S console; Storage: at least 72 GB | n/a |
    
-

    GTA 5 Download Size
    

-

    As you can see from the table above, GTA 5 is a very large game that requires a lot of space on your device. The download size of GTA 5 varies depending on your platform and the version of the game you have. For example, the PS4 and Xbox One versions of GTA 5 take up about 100 GB of space, while the PS5 and Xbox Series X/S versions take up about 72 GB. The PC version of GTA 5 takes up about 106 GB of space, but it also includes additional content and features that are not available on consoles.
    

-

    If you have a fast internet connection and enough storage space, you can download GTA 5 directly from official sources such as Steam, the Epic Games Store, the PlayStation Store, or the Microsoft Store. However, if you have a slow internet connection or limited storage space, you may want to consider using compressed files to download GTA 5 in a smaller size.
    
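    To put those numbers in perspective, assume a 20 Mbps connection: a 100 GB download is roughly 100 × 8,000 Mb ÷ 20 Mbps = 40,000 s, or about 11 hours, while a hypothetical 10 GB compressed package at the same speed takes about 4,000 s, or just over an hour.
    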

-

GTA 5 Compressed Files

-

Compressed files are files that have been reduced in size using various algorithms and techniques. File compression can help save bandwidth, storage space, and download time. For example, a file that is originally 100 MB can be compressed to 10 MB, which means it will take less time to download and less space to store.
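To make the idea concrete, here is a minimal Python sketch that gzip-compresses a file and reports the size reduction. The file name is hypothetical, and the actual ratio depends entirely on the data; already-compressed game assets often shrink far less than 10:1.

```python
import gzip
import os
import shutil

src = "game_data.bin"  # hypothetical file to compress
dst = src + ".gz"

# Stream through gzip so large files never have to fit in memory
with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)

original = os.path.getsize(src)
compressed = os.path.getsize(dst)
print(f"{original} -> {compressed} bytes "
      f"({100 * compressed / original:.1f}% of the original size)")
```

gzip is only one option; the archives discussed in this article typically use formats such as ZIP or RAR, which work on the same principle.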

- -

One way to download GTA 5 at a smaller size is to use compressed files that contain the game data. These files can be found on various websites and forums that offer GTA 5 downloads. However, not all compressed files are safe and reliable. Some of them may contain malware, viruses, or corrupted data that can damage your device or ruin your gaming experience. Therefore, you should be careful and cautious when downloading compressed files for GTA 5.

-

Benefits of Compressed Files

-

Using compressed files to download GTA 5 can have some benefits, such as:

-

- -

Drawbacks of Compressed Files

-

However, using compressed files to download GTA 5 can also have some drawbacks, such as:

- -

How to Find and Download Compressed Files for GTA 5

If you want to download GTA 5 at a smaller size using compressed files, you should follow these steps:

-
1. Search for reliable and safe sources of compressed files for GTA 5. You can use search engines, forums, blogs, or social media to find websites that offer GTA 5 downloads. However, you should be careful and avoid clicking on suspicious or fake links that could lead you to malware or scams. You can also use online tools or reviews to check the reputation and credibility of a website before downloading anything from it.
2. Select the compressed file that suits your device and platform. You should make sure the compressed file you choose is compatible with your device and platform. You should also check the compression ratio, the download size, and the quality of the file. You can compare different files and read the description and comments from other users to make an informed decision.
3. Download the compressed file to your device. You need enough free space on your device to store the compressed file. You also need a stable internet connection and a good download manager to speed up the process; once the download finishes, a checksum comparison like the sketch after this list can confirm the file arrived intact. You can also use a VPN or proxy to avoid any restrictions or limitations that might affect your download.
4. Install and run GTA 5 on your device. You need to follow the installation steps and enter the activation key if required to install GTA 5 on your device. You also need to update the game and install any patches or mods if necessary. Then you can launch GTA 5 and enjoy playing.
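As mentioned in step 3, one simple integrity check is to hash the downloaded archive and compare the result against a checksum published by the source. A minimal Python sketch follows; the file name and the expected SHA-256 value are hypothetical placeholders.

```python
import hashlib

archive = "gta5_compressed.zip"  # hypothetical downloaded archive
expected = "<sha-256 value published by the download source>"

# Hash in 1 MB chunks so a multi-gigabyte archive never sits in memory
sha256 = hashlib.sha256()
with open(archive, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

if sha256.hexdigest() == expected:
    print("Checksum matches - the file is intact.")
else:
    print("Checksum mismatch - re-download from a trusted source.")
```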
-

GTA 5 Gameplay

-

Now that you have downloaded GTA 5 at a smaller size using compressed files, you may be wondering what GTA 5 has to offer in terms of gameplay, graphics, and features. GTA 5 is a game that lets you experience a life of crime, adventure, and fun in a vast and diverse open world. Here are some of the aspects of GTA 5's gameplay that you can enjoy:

-

Story Mode

-

GTA 5 has a story mode that follows the lives of three protagonists: Michael, Franklin, and Trevor. Michael is a retired bank robber living a luxurious but unhappy life with his family in Los Santos. Franklin is a young, ambitious street hustler who works for a car dealer in South Los Santos. Trevor is a former associate of Michael's who lives a chaotic and violent life in Blaine County. The three characters have their own backgrounds, personalities, skills, and goals, but they are also connected by a series of events that force them to work together.

-

GTA 5's story mode consists of various missions involving heists, shootouts, chases, stealth, and more. You can switch between the three characters at any time during the game, either manually or automatically depending on the situation. You can also customize their appearance, skills, weapons, vehicles, and properties. GTA 5's story mode is full of humor, drama, action, and surprises that will keep you hooked until the end.

-

Online Mode

- -

GTA Online, GTA 5's multiplayer mode, offers many options for having fun and making money in the online world. You can play various modes such as races, deathmatches, heists, missions, and adversary modes. You can also explore the map and find activities such as golf, tennis, skydiving, and hunting. You can also buy various businesses such as nightclubs, bunkers, and arcades, and run them as you wish. GTA Online is constantly updated with new content and features that add more variety and excitement to the game.

-

New Features and Updates

-

GTA 5 is not just a game that was released in 2013, but also a game that is still evolving and improving in 2023. Rockstar Games has been adding new content and features to GTA 5 on current-generation consoles and PC that improve the experience and quality of the game. Some of the new features and updates are:

- -

Conclusion

- -

We hope this article has helped you learn how to download GTA 5 in 100 MB using compressed files. If you have any questions or feedback, please feel free to leave a comment below. Thanks for reading, and happy gaming!

-

Frequently Asked Questions

-

Here are some frequently asked questions about GTA 5 and their answers:

-
1. Q: How long is GTA 5's story mode?
   A: GTA 5's story mode can take between 25 and 40 hours to complete, depending on your play style and choices. However, there are also many side missions, activities, and secrets that can extend the playtime significantly.
2. Q: How many players can play GTA Online?
   A: GTA Online can support up to 30 players per session on current-generation consoles and PC. However, some modes and activities may have different player limits.
3. Q: How can I play GTA Online with my friends?
   A: You can play GTA Online with your friends by joining or creating an invite-only session, a crew session, or a friends session. You can also join a party or create a chat group to communicate with your friends.
4. Q: How can I make money in GTA Online?
   A: There are many ways to make money in GTA Online, such as completing missions, heists, races, and business ventures. You can also rob stores, sell cars, or bet on events. However, you should avoid using cheats, hacks, or glitches to make money, as they can get you banned or penalized.
5. Q: How can I get GTA 5 for free?
   A: There is no legal way to get GTA 5 for free as of now. However, you may be able to get it for free in the future if it becomes available on platforms like PlayStation Plus or the Epic Games Store.

-
-
\ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_timer.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_timer.py deleted file mode 100644 index a2ca6be03c43054caaa3660998273ebf704345dd..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_timer.py +++ /dev/null @@ -1,19 +0,0 @@ -""" -Timer context manager, only used in debug. - -""" - -from time import time - -import contextlib -from typing import Generator - - -@contextlib.contextmanager -def timer(subject: str = "time") -> Generator[None, None, None]: - """print the elapsed time. (only used in debugging)""" - start = time() - yield - elapsed = time() - start - elapsed_ms = elapsed * 1000 - print(f"{subject} elapsed {elapsed_ms:.1f}ms") diff --git a/spaces/BigChungux/Pet_Survey2/README.md b/spaces/BigChungux/Pet_Survey2/README.md deleted file mode 100644 index 2cc853d6b0c57d01fc94fe2f571893a35865bc3f..0000000000000000000000000000000000000000 --- a/spaces/BigChungux/Pet_Survey2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Pet Survey2 -emoji: 🏢 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Boops88/gsdf-Counterfeit-V2.5/README.md b/spaces/Boops88/gsdf-Counterfeit-V2.5/README.md deleted file mode 100644 index b8908b2f71191a72c7b07dea566d9189d7b345a1..0000000000000000000000000000000000000000 --- a/spaces/Boops88/gsdf-Counterfeit-V2.5/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gsdf Counterfeit V2.5 -emoji: 👁 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/BreadBytes1/CC-Dashboard/old_app.py b/spaces/BreadBytes1/CC-Dashboard/old_app.py deleted file mode 100644 index dc2708776a5c116ffbcb00c30a095768af457d1c..0000000000000000000000000000000000000000 --- a/spaces/BreadBytes1/CC-Dashboard/old_app.py +++ /dev/null @@ -1,330 +0,0 @@ -# --- -# jupyter: -# jupytext: -# text_representation: -# extension: .py -# format_name: light -# format_version: '1.5' -# jupytext_version: 1.14.2 -# kernelspec: -# display_name: Python [conda env:bbytes] * -# language: python -# name: conda-env-bbytes-py -# --- - -# + -import csv -import pandas as pd -from datetime import datetime, timedelta -import numpy as np -import datetime as dt -import matplotlib.pyplot as plt -from pathlib import Path - -import streamlit as st -import plotly.express as px -import altair as alt -import dateutil.parser -import copy - - -# + -@st.experimental_memo -def get_hist_info(df_coin, principal_balance,plheader): - numtrades = int(len(df_coin)) - numwin = int(sum(df_coin[plheader] > 0)) - numloss = int(sum(df_coin[plheader] < 0)) - winrate = int(np.round(100*numwin/numtrades,2)) - - grosswin = sum(df_coin[df_coin[plheader] > 0][plheader]) - grossloss = sum(df_coin[df_coin[plheader] < 0][plheader]) - if grossloss !=0: - pfactor = -1*np.round(grosswin/grossloss,2) - else: - pfactor = np.nan - return numtrades, numwin, numloss, winrate, pfactor -@st.experimental_memo -def get_rolling_stats(df, lev, otimeheader, days): - max_roll = (df[otimeheader].max() - df[otimeheader].min()).days - - if max_roll >= days: - rollend = df[otimeheader].max()-timedelta(days=days) - rolling_df = 
df[df[otimeheader] >= rollend] - - if len(rolling_df) > 0: - rolling_perc = rolling_df['Return Per Trade'].dropna().cumprod().values[-1]-1 - else: - rolling_perc = np.nan - else: - rolling_perc = np.nan - return 100*rolling_perc - -@st.experimental_memo -def filt_df(df, cheader, symbol_selections): - """ - Inputs: df (pd.DataFrame), cheader (str) and symbol_selections (list[str]). - - Returns a filtered pd.DataFrame containing only data that matches symbol_selections (list[str]) - from df[cheader]. - """ - - df = df.copy() - df = df[df[cheader].isin(symbol_selections)] - - return df - -@st.experimental_memo -def my_style(v, props=''): - props = 'color:red' if v < 0 else 'color:green' - return props - -@st.experimental_memo -def cc_coding(row): - return ['background-color: lightgrey'] * len(row) if row['Exit Date'] <= datetime.strptime('2022-12-16 00:00:00','%Y-%m-%d %H:%M:%S').date() else [''] * len(row) - - -@st.cache(ttl=24*3600, allow_output_mutation=True) -def load_data(filename, otimeheader,fmat): - df = pd.read_csv(open(filename,'r'), sep='\t') # so as not to mutate cached value - df.columns = ['Trade','Signal','Entry Date','Buy Price', 'Sell Price','Exit Date', 'P/L per token', 'P/L %', 'Drawdown %'] -# df.insert(1, 'Signal', ['Long']*len(df)) - - df['Buy Price'] = df['Buy Price'].str.replace('$', '', regex=True) - df['Sell Price'] = df['Sell Price'].str.replace('$', '', regex=True) - df['Buy Price'] = df['Buy Price'].str.replace(',', '', regex=True) - df['Sell Price'] = df['Sell Price'].str.replace(',', '', regex=True) - df['P/L per token'] = df['P/L per token'].str.replace('$', '', regex=True) - df['P/L per token'] = df['P/L per token'].str.replace(',', '', regex=True) - df['P/L %'] = df['P/L %'].str.replace('%', '', regex=True) - - df['Buy Price'] = pd.to_numeric(df['Buy Price']) - df['Sell Price'] = pd.to_numeric(df['Sell Price']) - df['P/L per token'] = pd.to_numeric(df['P/L per token']) - df['P/L %'] = pd.to_numeric(df['P/L %']) - - dateheader = 'Date' - theader = 'Time' - - df[dateheader] = [tradetimes.split(" ")[0] for tradetimes in df[otimeheader].values] - df[theader] = [tradetimes.split(" ")[1] for tradetimes in df[otimeheader].values] - - df[otimeheader]= [dateutil.parser.parse(date+' '+time) - for date,time in zip(df[dateheader],df[theader])] - - df[otimeheader] = pd.to_datetime(df[otimeheader]) - df['Exit Date'] = pd.to_datetime(df['Exit Date']) - df.sort_values(by=otimeheader, inplace=True) - - df[dateheader] = [dateutil.parser.parse(date).date() for date in df[dateheader]] - df[theader] = [dateutil.parser.parse(time).time() for time in df[theader]] - df['Trade'] = [i+1 for i in range(len(df))] #reindex - - return df - -def runapp(): - bot_selections = "Cosmic Cupcake" - otimeheader = 'Entry Date' - plheader = 'P/L %' - fmat = '%Y-%m-%d %H:%M:%S' - dollar_cap = 100000.00 - fees = .075/100 - st.header(f"{bot_selections} Performance Dashboard :bread: :moneybag:") - st.write("Welcome to the Trading Bot Dashboard by BreadBytes! 
You can use this dashboard to track " + - "the performance of our trading bots.") - # st.sidebar.header("FAQ") - - # with st.sidebar.subheader("FAQ"): - # st.write(Path("FAQ_README.md").read_text()) - st.subheader("Choose your settings:") - no_errors = True - - data = load_data("CC-Trade-Log.csv",otimeheader,fmat) - df = data.copy(deep=True) - - dateheader = 'Date' - theader = 'Time' - - with st.form("user input", ): - if no_errors: - with st.container(): - col1, col2 = st.columns(2) - with col1: - try: - startdate = st.date_input("Start Date", value=pd.to_datetime(df[otimeheader]).min()) - except: - st.error("Please select your exchange or upload a supported trade log file.") - no_errors = False - with col2: - try: - enddate = st.date_input("End Date", value=datetime.today()) - except: - st.error("Please select your exchange or upload a supported trade log file.") - no_errors = False - #st.sidebar.subheader("Customize your Dashboard") - - if no_errors and (enddate < startdate): - st.error("End Date must be later than Start date. Please try again.") - no_errors = False - with st.container(): - col1,col2 = st.columns(2) - with col2: - lev = st.number_input('Leverage', min_value=1, value=1, max_value= 3, step=1) - with col1: - principal_balance = st.number_input('Starting Balance', min_value=0.00, value=1000.00, max_value= dollar_cap, step=.01) - - #hack way to get button centered - c = st.columns(9) - with c[4]: - submitted = st.form_submit_button("Get Cookin'!") - - signal_map = {'Long': 1, 'Short':-1} # 1 for long #-1 for short - - df['Calculated Return %'] = df['Signal'].map(signal_map)*(1-fees)*((df['Sell Price']-df['Buy Price'])/df['Buy Price'] - fees) #accounts for fees on open and close of trade - - - if submitted and principal_balance * lev > dollar_cap: - lev = np.floor(dollar_cap/principal_balance) - st.error(f"WARNING: (Starting Balance)*(Leverage) exceeds the ${dollar_cap} limit. Using maximum available leverage of {lev}") - - if submitted and no_errors: - df = df[(df[dateheader] >= startdate) & (df[dateheader] <= enddate)] - - if len(df) == 0: - st.error("There are no available trades matching your selections. Please try again!") - no_errors = False - if no_errors: - df['Return Per Trade'] = 1+lev*df['Calculated Return %'].values - - df['Compounded Return'] = df['Return Per Trade'].cumprod() - df['New Balance'] = [min(dollar_cap/lev, bal*principal_balance) for bal in df['Compounded Return']] - df['Balance used in Trade'] = np.concatenate([[principal_balance], df['New Balance'].values[:-1]]) - df['Net P/L Per Trade'] = (df['Return Per Trade']-1)*df['Balance used in Trade'] - df['Cumulative P/L'] = df['Net P/L Per Trade'].cumsum() - cum_pl = df.loc[df.drop('Drawdown %', axis=1).dropna().index[-1],'Cumulative P/L'] + principal_balance - - effective_return = 100*((cum_pl - principal_balance)/principal_balance) - - st.header(f"{bot_selections} Results") - if len(bot_selections) > 1: - st.metric( - "Total Account Balance", - f"${cum_pl:.2f}", - f"{100*(cum_pl-principal_balance)/(principal_balance):.2f} %", - ) - - st.line_chart(data=df.drop('Drawdown %', axis=1).dropna(), x='Exit Date', y='Cumulative P/L', use_container_width=True) - - df['Per Trade Return Rate'] = df['Return Per Trade']-1 - - totals = pd.DataFrame([], columns = ['# of Trades', 'Wins', 'Losses', 'Win Rate', 'Profit Factor']) - data = get_hist_info(df.drop('Drawdown %', axis=1).dropna(), principal_balance,'Per Trade Return Rate') - totals.loc[len(totals)] = list(i for i in data) - - totals['Cum. 
P/L'] = cum_pl-principal_balance - totals['Cum. P/L (%)'] = 100*(cum_pl-principal_balance)/principal_balance - #results_df['Avg. P/L'] = (cum_pl-principal_balance)/results_df['# of Trades'].values[0] - #results_df['Avg. P/L (%)'] = 100*results_df['Avg. P/L'].values[0]/principal_balance - - if df.empty: - st.error("Oops! None of the data provided matches your selection(s). Please try again.") - else: - #st.dataframe(totals.style.format({'# of Trades': '{:.0f}','Wins': '{:.0f}','Losses': '{:.0f}','Win Rate': '{:.2f}%','Profit Factor' : '{:.2f}', 'Avg. P/L (%)': '{:.2f}%', 'Cum. P/L (%)': '{:.2f}%', 'Cum. P/L': '{:.2f}', 'Avg. P/L': '{:.2f}'}) - #.text_gradient(subset=['Win Rate'],cmap="RdYlGn", vmin = 0, vmax = 100)\ - #.text_gradient(subset=['Profit Factor'],cmap="RdYlGn", vmin = 0, vmax = 2), use_container_width=True) - for row in totals.itertuples(): - col1, col2, col3, col4 = st.columns(4) - c1, c2, c3, c4 = st.columns(4) - with col1: - st.metric( - "Total Trades", - f"{row._1:.0f}", - ) - with c1: - st.metric( - "Profit Factor", - f"{row._5:.2f}", - ) - with col2: - st.metric( - "Wins", - f"{row.Wins:.0f}", - ) - with c2: - st.metric( - "Cumulative P/L", - f"${row._6:.2f}", - f"{row._7:.2f} %", - ) - with col3: - st.metric( - "Losses", - f"{row.Losses:.0f}", - ) - with c3: - st.metric( - "Rolling 7 Days", - "",#f"{(1+get_rolling_stats(df,otimeheader, 30))*principal_balance:.2f}", - f"{get_rolling_stats(df,lev, otimeheader, 7):.2f}%", - ) - st.metric( - "Rolling 30 Days", - "",#f"{(1+get_rolling_stats(df,otimeheader, 30))*principal_balance:.2f}", - f"{get_rolling_stats(df,lev, otimeheader, 30):.2f}%", - ) - - with col4: - st.metric( - "Win Rate", - f"{row._4:.1f}%", - ) - with c4: - st.metric( - "Rolling 90 Days", - "",#f"{(1+get_rolling_stats(df,otimeheader, 30))*principal_balance:.2f}", - f"{get_rolling_stats(df,lev, otimeheader, 90):.2f}%", - ) - st.metric( - "Rolling 180 Days", - "",#f"{(1+get_rolling_stats(df,otimeheader, 30))*principal_balance:.2f}", - f"{get_rolling_stats(df,lev, otimeheader, 180):.2f}%", - ) - - if submitted: - grouped_df = df.groupby('Exit Date').agg({'Signal':'min','Entry Date': 'min','Exit Date': 'max','Buy Price': 'mean', - 'Sell Price' : 'max', - 'Net P/L Per Trade': 'mean', - 'Calculated Return %' : lambda x: np.round(100*lev*x.sum(),2)}) - grouped_df.index = range(1, len(grouped_df)+1) - grouped_df.rename(columns={'Buy Price':'Avg. Buy Price', - 'Net P/L Per Trade':'Net P/L', - 'Calculated Return %':'P/L %'}, inplace=True) - else: - grouped_df = df.groupby('Exit Date').agg({'Signal':'min','Entry Date': 'min','Exit Date': 'max','Buy Price': 'mean', - 'Sell Price' : 'max', - 'P/L per token' : 'mean', - 'Calculated Return %' : lambda x: np.round(100*x.sum(),2)}) - grouped_df.index = range(1, len(grouped_df)+1) - grouped_df.rename(columns={'Buy Price':'Avg. Buy Price', - 'P/L per token':'Net P/L', - 'Calculated Return %':'P/L %'}, inplace=True) - - st.subheader("Trade Logs") - grouped_df['Entry Date'] = pd.to_datetime(grouped_df['Entry Date']) - grouped_df['Exit Date'] = pd.to_datetime(grouped_df['Exit Date']) - st.dataframe(grouped_df.style.format({'Entry Date':'{:%m-%d-%Y %H:%M:%S}','Exit Date':'{:%m-%d-%Y %H:%M:%S}','Avg. Buy Price': '${:.2f}', 'Sell Price': '${:.2f}', 'Net P/L':'${:.2f}', 'P/L %':'{:.2f}%'})\ - .apply(cc_coding, axis=1)\ - .applymap(my_style,subset=['Net P/L'])\ - .applymap(my_style,subset=['P/L %'])\ - ,use_container_width=True) - new_title = '
           Backtest Data
' - st.markdown(new_title, unsafe_allow_html=True) - -if __name__ == "__main__": - st.set_page_config( - "Trading Bot Dashboard", - layout="wide", - ) - runapp() -# - - - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/scan.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/scan.h deleted file mode 100644 index 32a05a5a6bd3a5be92bbd84c1bf4edb9e929abeb..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/scan.h +++ /dev/null @@ -1,64 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file scan.h - * \brief TBB implementations of scan functions. - */ - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace tbb -{ -namespace detail -{ - -template - OutputIterator inclusive_scan(tag, - InputIterator first, - InputIterator last, - OutputIterator result, - BinaryFunction binary_op); - - -template - OutputIterator exclusive_scan(tag, - InputIterator first, - InputIterator last, - OutputIterator result, - T init, - BinaryFunction binary_op); - - -} // end namespace detail -} // end namespace tbb -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/Chomkwoy/Nilkessye/utils/hangul.py b/spaces/Chomkwoy/Nilkessye/utils/hangul.py deleted file mode 100644 index 82f09fe3994308491f0e40c2129abf3e6ba570dc..0000000000000000000000000000000000000000 --- a/spaces/Chomkwoy/Nilkessye/utils/hangul.py +++ /dev/null @@ -1,269 +0,0 @@ -import copy -import unicodedata - -YALE_TO_HANGUL_INITIAL_CONSONANTS = { - 'k': '\u1100', - 'kk': '\u1101', - 'n': '\u1102', - 'nn': '\u1114', - 't': '\u1103', - 'tt': '\u1104', - 'l': '\u1105', - 'm': '\u1106', - 'p': '\u1107', - 'pp': '\u1108', - 's': '\u1109', - 'ss': '\u110a', - 'G': '\u110B', - 'GG': '\u1147', - 'c': '\u110C', - 'cc': '\u110D', - 'ch': '\u110e', - 'kh': '\u110f', - 'th': '\u1110', - 'ph': '\u1111', - 'h': '\u1112', - - 'pk': '\u111e', - 'pt': '\u1120', - 'ps': '\u1121', - 'psk': '\u1122', - 'pst': '\u1123', - 'pc': '\u1127', - 'pth': '\u1129', - 'W': '\u112b', - 'sk': '\u112d', - 'sn': '\u112e', - 'st': '\u112f', - 'sp': '\u1132', - 'sc': '\u1136', - 'sh': '\u113b', - 'z': '\u1140', - 'hh': '\u1158', - 'q': '\u1159', - - 'ng': '\u114c', - - '': '\u115f', -} - -YALE_TO_HANGUL_FINAL_CONSONANTS = { - 'k': '\u11a8', - 'ks': '\u11aa', - 'n': '\u11ab', - 't': '\u11ae', - 'l': '\u11af', - 'lk': '\u11b0', - 'lm': '\u11b1', - 'lp': '\u11b2', - 'ls': '\u11b3', - 'm': '\u11b7', - 'p': '\u11b8', - 'ps': '\u11b9', - 's': '\u11ba', - 'G': '\u11bc', - 'nt': '\u11c6', - 'ns': '\u11c7', - 'nz': '\u11c8', - 'lks': '\u11cc', - 'lz': '\u11d7', - 'lq': '\u11d9', - 'mp': '\u11dc', - 'ms': '\u11dd', - 'mz': '\u11df', - 'M': '\u11e2', - 'W': '\u11e6', - 'z': '\u11eb', - 'ng': '\u11f0', - 'q': '\u11f9', - 'ngs': '\u11f1', - '': '' -} - -YALE_TO_HANGUL_VOWELS = { - 'a': '\u1161', - 'ay': '\u1162', - 'ya': '\u1163', - 'yay': '\u1164', - 'e': '\u1165', - 'ey': '\u1166', - 'ye': '\u1167', - 
'yey': '\u1168', - 'wo': '\u1169', - 'wa': '\u116a', - 'way': '\u116b', - 'woy': '\u116c', - 'yo': '\u116d', - 'wu': '\u116e', - 'we': '\u116f', - 'wey': '\u1170', - 'wuy': '\u1171', - 'yu': '\u1172', - 'u': '\u1173', - 'uy': '\u1174', - 'i': '\u1175', - - 'o': '\u119e', - 'oy': '\u11a1', - 'yoy': '\u1188', - 'yuy': '\u1194', - 'ywe': '\u1191', - 'ywey': '\u1192', - 'ywa': '\u1184', - 'yway': '\u1185' -} - -UNICODE_COMPATIBILITY_FORMS = { - 'ᄀ': 'ㄱ', - 'ᄁ': 'ㄲ', - 'ᆪ': 'ㄳ', - 'ᄂ': 'ㄴ', - 'ᆬ': 'ㄵ', - 'ᆭ': 'ㄶ', - 'ᄃ': 'ㄷ', - 'ᄄ': 'ㄸ', - 'ᄅ': 'ㄹ', - 'ᆰ': 'ㄺ', - 'ᆱ': 'ㄻ', - 'ᆲ': 'ㄼ', - 'ᆳ': 'ㄽ', - 'ᆴ': 'ㄾ', - 'ᆵ': 'ㄿ', - 'ᄚ': 'ㅀ', - 'ᄆ': 'ㅁ', - 'ᄇ': 'ㅂ', - 'ᄈ': 'ㅃ', - 'ᄡ': 'ㅄ', - 'ᄉ': 'ㅅ', - 'ᄊ': 'ㅆ', - 'ᄋ': 'ㅇ', - 'ᄌ': 'ㅈ', - 'ᄍ': 'ㅉ', - 'ᄎ': 'ㅊ', - 'ᄏ': 'ㅋ', - 'ᄐ': 'ㅌ', - 'ᄑ': 'ㅍ', - 'ᄒ': 'ㅎ', - 'ᅡ': 'ㅏ', - 'ᅢ': 'ㅐ', - 'ᅣ': 'ㅑ', - 'ᅤ': 'ㅒ', - 'ᅥ': 'ㅓ', - 'ᅦ': 'ㅔ', - 'ᅧ': 'ㅕ', - 'ᅨ': 'ㅖ', - 'ᅩ': 'ㅗ', - 'ᅪ': 'ㅘ', - 'ᅫ': 'ㅙ', - 'ᅬ': 'ㅚ', - 'ᅭ': 'ㅛ', - 'ᅮ': 'ㅜ', - 'ᅯ': 'ㅝ', - 'ᅰ': 'ㅞ', - 'ᅱ': 'ㅟ', - 'ᅲ': 'ㅠ', - 'ᅳ': 'ㅡ', - 'ᅴ': 'ㅢ', - 'ᅵ': 'ㅣ', - 'ᄔ': 'ㅥ', - 'ᄕ': 'ㅦ', - 'ᇇ': 'ㅧ', - 'ᇈ': 'ㅨ', - 'ᇌ': 'ㅩ', - 'ᇎ': 'ㅪ', - 'ᇓ': 'ㅫ', - 'ᇗ': 'ㅬ', - 'ᇙ': 'ㅭ', - 'ᄜ': 'ㅮ', - 'ᇝ': 'ㅯ', - 'ᇟ': 'ㅰ', - 'ᄝ': 'ㅱ', - 'ᄞ': 'ㅲ', - 'ᄠ': 'ㅳ', - 'ᄢ': 'ㅴ', - 'ᄣ': 'ㅵ', - 'ᄧ': 'ㅶ', - 'ᄩ': 'ㅷ', - 'ᄫ': 'ㅸ', - 'ᄬ': 'ㅹ', - 'ᄭ': 'ㅺ', - 'ᄮ': 'ㅻ', - 'ᄯ': 'ㅼ', - 'ᄲ': 'ㅽ', - 'ᄶ': 'ㅾ', - 'ᅀ': 'ㅿ', - 'ᅇ': 'ㆀ', - 'ᅌ': 'ㆁ', - 'ᇱ': 'ㆂ', - 'ᇲ': 'ㆃ', - 'ᅗ': 'ㆄ', - 'ᅘ': 'ㆅ', - 'ᅙ': 'ㆆ', - 'ᆄ': 'ㆇ', - 'ᆅ': 'ㆈ', - 'ᆈ': 'ㆉ', - 'ᆑ': 'ㆊ', - 'ᆒ': 'ㆋ', - 'ᆔ': 'ㆌ', - 'ᆞ': 'ㆍ', - 'ᆡ': 'ㆎ', -} - - -def convert_yale_to_hangul(yale): - syllables = yale.split('.') - - result = "" - for syllable in syllables: - out_syll = "" - tone_mark = "" - - orig_syllable = copy.copy(syllable) - - if any(syllable.endswith(t) for t in ['L', "H", "R"]): - tone_mark = { - 'L': '', - 'H': '\u302e', - 'R': '\u302f' - }[syllable[-1]] - syllable = syllable[:-1] - - initial_exists = False - for n in range(4, -1, -1): - if syllable[:n] in YALE_TO_HANGUL_INITIAL_CONSONANTS: - out_syll += YALE_TO_HANGUL_INITIAL_CONSONANTS[syllable[:n]] - syllable = syllable[n:] - initial_exists = (n > 0) - break - - vowel_exists = False - for n in range(4, 0, -1): - if syllable[:n] in YALE_TO_HANGUL_VOWELS: - out_syll += YALE_TO_HANGUL_VOWELS[syllable[:n]] - syllable = syllable[n:] - vowel_exists = True - break - - for n in range(4, 0, -1): - if syllable[:n] in YALE_TO_HANGUL_FINAL_CONSONANTS: - out_syll += YALE_TO_HANGUL_FINAL_CONSONANTS[syllable[:n]] - syllable = syllable[n:] - break - - out_syll += tone_mark - - if initial_exists and not vowel_exists and tone_mark == "": - if out_syll in UNICODE_COMPATIBILITY_FORMS: - out_syll = UNICODE_COMPATIBILITY_FORMS[out_syll] - - if not initial_exists and vowel_exists and tone_mark == "": - if out_syll[1:] in UNICODE_COMPATIBILITY_FORMS: - out_syll = UNICODE_COMPATIBILITY_FORMS[out_syll[1:]] - - if len(syllable) > 0: - # Failed to convert - out_syll = orig_syllable - - result += out_syll - - return result diff --git a/spaces/Chris4K/llms_compare/101 Trucos Baraja Svengali Pdf ((HOT)) Free.md b/spaces/Chris4K/llms_compare/101 Trucos Baraja Svengali Pdf ((HOT)) Free.md deleted file mode 100644 index a2583fac6bee0e9adb32dbcd2a70fca40c910029..0000000000000000000000000000000000000000 --- a/spaces/Chris4K/llms_compare/101 Trucos Baraja Svengali Pdf ((HOT)) Free.md +++ /dev/null @@ -1,98 +0,0 @@ -## 101 Trucos Baraja Svengali Pdf Free - - - - - - ![101 Trucos Baraja Svengali Pdf ((HOT)) 
Free](https://www.clarinetinstitute.com/uploads/6/1/3/3/61330883/s559042415394373090_p113_i10_w359.jpeg) - - - - - -**DOWNLOAD ••• [https://urluso.com/2tBNBt](https://urluso.com/2tBNBt)** - - - - - - - - - - - - - -# How to Perform Amazing Magic Tricks with a Svengali Deck - - - -A Svengali deck is a special type of playing cards that allows you to perform amazing magic tricks with ease. The deck consists of 52 cards, but half of them are identical and the other half are different. The identical cards are slightly shorter than the different ones, and they are arranged in a way that you can control which card appears on top or bottom of the deck. - - - -In this article, we will show you how to perform 101 amazing magic tricks with a Svengali deck. You will learn how to make cards change, disappear, reappear, jump, fly, and more. You will also learn how to use the deck for mind reading, predictions, and mentalism effects. You will be able to amaze your friends and family with your incredible skills and creativity. - - - -Before we start, you will need to get a Svengali deck. You can buy one online or at a magic shop, or you can make your own by cutting one card shorter than the rest and gluing it to another card of the same value. You can also download a free pdf of 101 Trucos Baraja Svengali Pdf Free[^1^], a book written by Lisa L. Hayes that teaches you how to perform amazing magic tricks with a Svengali deck. - - - -## Trick #1: The Basic Force - - - -The basic force is the most important technique you need to master when using a Svengali deck. It allows you to make any spectator choose the card you want them to choose. Here is how it works: - - - -1. Hold the deck in your left hand with your thumb on top and your fingers on the bottom. - -2. Riffle the cards from the back with your right thumb, making sure that you stop at one of the different cards. - -3. Ask the spectator to say "stop" whenever they want. - -4. When they say "stop", lift up all the cards above your right thumb and show them the bottom card of that packet. This will be one of the identical cards. - -5. Remember this card and put it back on top of the deck. - - - -You have now forced the spectator to choose the card you wanted them to choose. You can use this technique for many tricks, such as revealing their card in a surprising way or making it match your prediction. - - - -## Trick #2: The Card Change - - - -This trick will make it seem like you can change one card into another with a snap of your fingers. Here is how it works: - - - -1. Force a card on the spectator using the basic force technique. - -2. Show them their card and ask them to remember it. - -3. Put their card on top of the deck and cut it in half. - -4. Hold the top half of the deck in your right hand and show them the bottom card of that packet. This will be a different card. - -5. Say that you will change their card into this card with a snap of your fingers. - -6. Snap your fingers and turn over the top card of the bottom half of the deck. This will be one of the identical cards, matching their original card. - -7. Show them that their card has changed into this card and act surprised. - - - -You have now made it seem like you can change one card into another with a snap of your fingers. You can use this technique for many tricks, such as changing their card into a joker or a blank card. 
- - 145887f19f - - - - - diff --git a/spaces/Cicooo/vits-uma-genshin-honkai/Docker/Dockerfile b/spaces/Cicooo/vits-uma-genshin-honkai/Docker/Dockerfile deleted file mode 100644 index 4d39cdf02a2ec151686cc1d61234bf723068fed8..0000000000000000000000000000000000000000 --- a/spaces/Cicooo/vits-uma-genshin-honkai/Docker/Dockerfile +++ /dev/null @@ -1,12 +0,0 @@ -FROM python:3.9-bullseye -VOLUME ["/app"] -WORKDIR /app -# Set apt to Chinese mirror -RUN sed -i 's/deb.debian.org/mirrors.ustc.edu.cn/g' /etc/apt/sources.list -RUN apt-get update && apt-get -y install cmake git -RUN git clone https://huggingface.co/spaces/ikechan8370/vits-uma-genshin-honkai -WORKDIR /app/vits-uma-genshin-honkai -RUN sed -i "s/\.launch()/\.launch(server_name=\"0.0.0.0\")/" /app/vits-uma-genshin-honkai/app.py -ADD vits.sh /app/vits.sh -EXPOSE 7860 -ENTRYPOINT [ "/app/vits.sh" ] \ No newline at end of file diff --git a/spaces/CofAI/chat.b4/g4f/__init__.py b/spaces/CofAI/chat.b4/g4f/__init__.py deleted file mode 100644 index a0b4bac6aa4de9c0449095a3874c2cb9716169d7..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat.b4/g4f/__init__.py +++ /dev/null @@ -1,39 +0,0 @@ -import sys -from . import Provider -from g4f.models import Model, ModelUtils - - -class ChatCompletion: - @staticmethod - def create(model: Model.model or str, messages: list, provider: Provider.Provider = None, stream: bool = False, auth: str = False, **kwargs): - kwargs['auth'] = auth - - if provider and provider.needs_auth and not auth: - print( - f'ValueError: {provider.__name__} requires authentication (use auth="cookie or token or jwt ..." param)', file=sys.stderr) - sys.exit(1) - - try: - if isinstance(model, str): - try: - model = ModelUtils.convert[model] - except KeyError: - raise Exception(f'The model: {model} does not exist') - - engine = model.best_provider if not provider else provider - - if not engine.supports_stream and stream == True: - print( - f"ValueError: {engine.__name__} does not support 'stream' argument", file=sys.stderr) - sys.exit(1) - - print(f'Using {engine.__name__} provider') - - return (engine._create_completion(model.name, messages, stream, **kwargs) - if stream else ''.join(engine._create_completion(model.name, messages, stream, **kwargs))) - except TypeError as e: - print(e) - arg: str = str(e).split("'")[1] - print( - f"ValueError: {engine.__name__} does not support '{arg}' argument", file=sys.stderr) - sys.exit(1) diff --git a/spaces/CoreyMorris/MMLU-by-task-Leaderboard/app.py b/spaces/CoreyMorris/MMLU-by-task-Leaderboard/app.py deleted file mode 100644 index 1a7e52d886a4cc080e9c4f069386f799da1d9ae1..0000000000000000000000000000000000000000 --- a/spaces/CoreyMorris/MMLU-by-task-Leaderboard/app.py +++ /dev/null @@ -1,371 +0,0 @@ -import streamlit as st -import pandas as pd -import plotly.express as px -import matplotlib.pyplot as plt -import numpy as np -import plotly.graph_objects as go - -st.set_page_config(layout="wide") - -def load_csv_data(file_path): - return pd.read_csv(file_path) - - - - - -def plot_top_n(df, target_column, n=10): - top_n = df.nlargest(n, target_column) - - # Initialize the bar plot - fig, ax1 = plt.subplots(figsize=(10, 5)) - - # Set width for each bar and their positions - width = 0.28 - ind = np.arange(len(top_n)) - - # Plot target_column and MMLU_average on the primary y-axis with adjusted positions - ax1.bar(ind - width, top_n[target_column], width=width, color='blue', label=target_column) - ax1.bar(ind, top_n['MMLU_average'], width=width, color='orange', label='MMLU_average') - - 
# Set the primary y-axis labels and title - ax1.set_title(f'Top {n} performing models on {target_column}') - ax1.set_xlabel('Model') - ax1.set_ylabel('Score') - - # Create a secondary y-axis for Parameters - ax2 = ax1.twinx() - - # Plot Parameters as bars on the secondary y-axis with adjusted position - ax2.bar(ind + width, top_n['Parameters'], width=width, color='red', label='Parameters') - - # Set the secondary y-axis labels - ax2.set_ylabel('Parameters', color='red') - ax2.tick_params(axis='y', labelcolor='red') - - # Set the x-ticks and their labels - ax1.set_xticks(ind) - ax1.set_xticklabels(top_n.index, rotation=45, ha="right") - - # Adjust the legend - fig.tight_layout() - fig.legend(loc='center left', bbox_to_anchor=(1, 0.5)) - - # Show the plot - st.pyplot(fig) - -# Function to create an unfilled radar chart -def create_radar_chart_unfilled(df, model_names, metrics): - fig = go.Figure() - min_value = df.loc[model_names, metrics].min().min() - max_value = df.loc[model_names, metrics].max().max() - for model_name in model_names: - values_model = df.loc[model_name, metrics] - fig.add_trace(go.Scatterpolar( - r=values_model, - theta=metrics, - name=model_name - )) - - fig.update_layout( - polar=dict( - radialaxis=dict( - visible=True, - range=[min_value, max_value] - )), - showlegend=True, - width=800, # Change the width as needed - height=600 # Change the height as needed - ) - return fig - - - -# Function to create a line chart -def create_line_chart(df, model_names, metrics): - line_data = [] - for model_name in model_names: - values_model = df.loc[model_name, metrics] - for metric, value in zip(metrics, values_model): - line_data.append({'Model': model_name, 'Metric': metric, 'Value': value}) - - line_df = pd.DataFrame(line_data) - - fig = px.line(line_df, x='Metric', y='Value', color='Model', title='Comparison of Models', line_dash_sequence=['solid']) - fig.update_layout(showlegend=True) - return fig - -def find_top_differences_table(df, target_model, closest_models, num_differences=10, exclude_columns=['Parameters']): - # Calculate the absolute differences for each task between the target model and the closest models - new_df = df.drop(columns=exclude_columns) - differences = new_df.loc[closest_models].sub(new_df.loc[target_model]).abs() - # Unstack the differences and sort by the largest absolute difference - top_differences = differences.unstack().nlargest(num_differences) - # Convert the top differences to a DataFrame for display - top_differences_table = pd.DataFrame({ - 'Task': [idx[0] for idx in top_differences.index], - 'Difference': top_differences.values - }) - # Ensure that only unique tasks are returned - unique_top_differences_tasks = list(set(top_differences_table['Task'].tolist())) - return top_differences_table, unique_top_differences_tasks - -# st.title('Model Evaluation Results including MMLU by task') -st.title('Interactive Portal for Analyzing Open Source Large Language Models') -st.markdown("""***Last updated October 6th***""") -st.markdown("""**Models that are suspected to have training data contaminated with evaluation data have been removed.**""") -st.markdown(""" - This page provides a way to explore the results for individual tasks and compare models across tasks. Data for the benchmarks hellaswag, arc_challenge, and truthfulQA have also been included for comparison. - There are 57 tasks in the MMLU evaluation that cover a wide variety of subjects including Science, Math, Humanities, Social Science, Applied Science, Logic, and Security. 
- [Preliminary analysis of MMLU-by-Task data](https://coreymorrisdata.medium.com/preliminary-analysis-of-mmlu-evaluation-data-insights-from-500-open-source-models-e67885aa364b) - """) - -# Load the data into memory -data_path = "processed_data_2023-10-08.csv" -data_df = load_csv_data(data_path) -# drop the column Unnamed: 0 -data_df.rename(columns={'Unnamed: 0': "Model Name"}, inplace=True) -data_df.set_index("Model Name", inplace=True) - -filtered_data = data_df - -# sort the table by the MMLU_average column -filtered_data = filtered_data.sort_values(by=['MMLU_average'], ascending=False) - -# Select box for filtering by Parameters -parameter_threshold = st.selectbox( - 'Filter by Parameters (Less Than or Equal To):', - options=[3, 7, 13, 35, 'No threshold'], - index=4, # Set the default selected option to 'No threshold' - format_func=lambda x: f"{x}" if isinstance(x, int) else x -) -if isinstance(parameter_threshold, int): - filtered_data = filtered_data[filtered_data['Parameters'] <= parameter_threshold] - -# model name filtering -search_queries = st.text_input("Filter by Model Name:", "").replace(" ", "").split(',') -if search_queries: - filtered_data = filtered_data[filtered_data.index.str.contains('|'.join(search_queries), case=False)] - -# column name filtering -column_search_query = st.text_input("Filter by Column/Task Name:", "").replace(" ", "").split(',') -matching_columns = [col for col in filtered_data.columns if any(query.lower() in col.lower() for query in column_search_query)] -filtered_data = filtered_data[matching_columns] - - -# Display the DataFrame with only the matching columns -st.markdown("## Sortable Results") -st.dataframe( - filtered_data[matching_columns], - column_config={ - "URL": st.column_config.LinkColumn( # Only current way to make url a clickable link with streamlit without removing the interactivity of the table - width="small" - ) - }, - hide_index=True, -) - -# CSV download -filtered_data.index.name = "Model Name" - -csv = filtered_data.to_csv(index=True) -st.download_button( - label="Download data as CSV", - data=csv, - file_name="model_evaluation_results.csv", - mime="text/csv", -) - - -def create_plot(df, x_values, y_values, models=None, title=None): - if models is not None: - df = df[df.index.isin(models)] - - # remove rows with NaN values - df = df.dropna(subset=[x_values, y_values]) - - #remove label rows URL, full_model_name - df = df.drop(columns=['URL', 'full_model_name']) - - plot_data = pd.DataFrame({ - 'Model': df.index, - x_values: df[x_values], - y_values: df[y_values], - }) - - plot_data['color'] = 'purple' - fig = px.scatter(plot_data, x=x_values, y=y_values, color='color', hover_data=['Model'], trendline="ols") - - # If title is not provided, use x_values vs. y_values as the default title - if title is None: - title = x_values + " vs. 
" + y_values - - layout_args = dict( - showlegend=False, - xaxis_title=x_values, - yaxis_title=y_values, - xaxis=dict(), - yaxis=dict(), - title=title, - height=500, - width=1000, - ) - fig.update_layout(**layout_args) - - # Add a dashed line at 0.25 for the y_values - x_min = df[x_values].min() - x_max = df[x_values].max() - - y_min = df[y_values].min() - y_max = df[y_values].max() - - if x_values.startswith('MMLU'): - fig.add_shape( - type='line', - x0=0.25, x1=0.25, - y0=y_min, y1=y_max, - line=dict( - color='red', - width=2, - dash='dash' - ) - ) - - if y_values.startswith('MMLU'): - fig.add_shape( - type='line', - x0=x_min, x1=x_max, - y0=0.25, y1=0.25, - line=dict( - color='red', - width=2, - dash='dash' - ) - ) - - return fig - - -# Custom scatter plots -st.header('Custom scatter plots') -st.write(""" - The scatter plot is useful to identify models that outperform or underperform on a particular task in relation to their size or overall performance. - Identifying these models is a first step to better understand what training strategies result in better performance on a particular task. - """) -st.markdown("***The dashed red line indicates random chance accuracy of 0.25 as the MMLU evaluation is multiple choice with 4 response options.***") -# add a line separating the writing -st.markdown("***") -st.write("As expected, there is a strong positive relationship between the number of parameters and average performance on the MMLU evaluation.") - - -column_list_for_plotting = filtered_data.columns.tolist() -if 'URL' in column_list_for_plotting: - column_list_for_plotting.remove('URL') -if 'full_model_name' in column_list_for_plotting: - column_list_for_plotting.remove('full_model_name') - -selected_x_column = st.selectbox('Select x-axis', column_list_for_plotting, index=0) -selected_y_column = st.selectbox('Select y-axis', column_list_for_plotting, index=1) - -if selected_x_column != selected_y_column: # Avoid creating a plot with the same column on both axes - fig = create_plot(filtered_data, selected_x_column, selected_y_column) - st.plotly_chart(fig) -else: - st.write("Please select different columns for the x and y axes.") - - -# end of custom scatter plots - - - -# # Section to select a model and display radar and line charts -# st.header("Compare a Selected Model to the 5 Models Closest in MMLU Average Performance") -# st.write(""" -# This comparison highlights the nuances in model performance across different tasks. -# While the overall MMLU average score provides a general understanding of a model's capabilities, -# examining the closest models reveals variations in performance on individual tasks. -# Such an analysis can uncover specific strengths and weaknesses and guide further exploration and improvement. 
-# """) - -# default_model_name = "GPT-JT-6B-v0" - -# default_model_index = filtered_data.index.tolist().index(default_model_name) if default_model_name in filtered_data.index else 0 -# selected_model_name = st.selectbox("Select a Model:", filtered_data.index.tolist(), index=default_model_index) - -# # Get the closest 5 models with unique indices -# closest_models_diffs = filtered_data['MMLU_average'].sub(filtered_data.loc[selected_model_name, 'MMLU_average']).abs() -# closest_models = closest_models_diffs.nsmallest(5, keep='first').index.drop_duplicates().tolist() - - -# Find the top 10 tasks with the largest differences and convert to a DataFrame -# top_differences_table, top_differences_tasks = find_top_differences_table(filtered_data, selected_model_name, closest_models) - -# Display the DataFrame for the closest models and the top differences tasks -# st.dataframe(filtered_data.loc[closest_models, top_differences_tasks]) - -# # Display the table in the Streamlit app -# st.markdown("## Top Differences") -# st.dataframe(top_differences_table) - -# Create a radar chart for the tasks with the largest differences -# fig_radar_top_differences = create_radar_chart_unfilled(filtered_data, closest_models, top_differences_tasks) - -# Display the radar chart -# st.plotly_chart(fig_radar_top_differences) - - -st.markdown("## Notable findings and plots") - -# Moral scenarios plots -st.markdown("### MMLU’s Moral Scenarios Benchmark Doesn’t Measure What You Think it Measures") -def show_random_moral_scenarios_question(): - moral_scenarios_data = pd.read_csv('moral_scenarios_questions.csv') - random_question = moral_scenarios_data.sample() - expander = st.expander("Show a random moral scenarios question") - expander.write(random_question['query'].values[0]) - - - -st.write(""" - After a deeper dive into the moral scenarios task, it appears that benchmark is not a valid measurement of moral judgement. - The challenges these models face are not rooted in understanding each scenario, but rather in the structure of the task itself. - I would recommend using a different benchmark for moral judgement. More details of the analysis can be found here: [MMLU’s Moral Scenarios Benchmark Doesn’t Measure What You Think it Measures ](https://medium.com/p/74fd6e512521) - """) - -show_random_moral_scenarios_question() - -fig = create_plot(filtered_data, 'Parameters', 'MMLU_moral_scenarios', title="Impact of Parameter Count on Accuracy for Moral Scenarios") -st.plotly_chart(fig) -st.write() - - - -fig = create_plot(filtered_data, 'MMLU_average', 'MMLU_moral_scenarios') -st.plotly_chart(fig) - -st.markdown('### Abstract Algebra Performance') -st.write("Small models showed surprisingly strong performance on the abstract algebra task. A 6 Billion parameter model is tied for the best performance on this task and there are a number of other small models in the top 10.") -plot_top_n(filtered_data, 'MMLU_abstract_algebra', 10) - -fig = create_plot(filtered_data, 'Parameters', 'MMLU_abstract_algebra') -st.plotly_chart(fig) - -st.markdown("***Thank you to hugging face for running the evaluations and supplying the data as well as the original authors of the evaluations.***") - -st.markdown(""" -# Citation - -1. Corey Morris (2023). *Exploring the Characteristics of Large Language Models: An Interactive Portal for Analyzing 700+ Open Source Models Across 57 Diverse Evaluation Tasks*. [link](https://huggingface.co/spaces/CoreyMorris/MMLU-by-task-Leaderboard) - -2. 
Edward Beeching, Clémentine Fourrier, Nathan Habib, Sheon Han, Nathan Lambert, Nazneen Rajani, Omar Sanseviero, Lewis Tunstall, Thomas Wolf. (2023). *Open LLM Leaderboard*. Hugging Face. [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) - -3. Gao, Leo et al. (2021). *A framework for few-shot language model evaluation*. Zenodo. [link](https://doi.org/10.5281/zenodo.5371628) - -4. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord. (2018). *Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge*. arXiv. [link](https://arxiv.org/abs/1803.05457) - -5. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, Yejin Choi. (2019). *HellaSwag: Can a Machine Really Finish Your Sentence?*. arXiv. [link](https://arxiv.org/abs/1905.07830) - -6. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, Jacob Steinhardt. (2021). *Measuring Massive Multitask Language Understanding*. arXiv. [link](https://arxiv.org/abs/2009.03300) - -7. Stephanie Lin, Jacob Hilton, Owain Evans. (2022). *TruthfulQA: Measuring How Models Mimic Human Falsehoods*. arXiv. [link](https://arxiv.org/abs/2109.07958) -""") diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/c2_model_loading.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/c2_model_loading.py deleted file mode 100644 index 041d7e0141d52c2b6390d13a437062477b493fd5..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/c2_model_loading.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import logging -import pickle -from collections import OrderedDict - -import torch - -from maskrcnn_benchmark.utils.model_serialization import load_state_dict -from maskrcnn_benchmark.utils.registry import Registry - - -def _rename_basic_resnet_weights(layer_keys): - layer_keys = [k.replace("_", ".") for k in layer_keys] - layer_keys = [k.replace(".w", ".weight") for k in layer_keys] - layer_keys = [k.replace(".bn", "_bn") for k in layer_keys] - layer_keys = [k.replace(".b", ".bias") for k in layer_keys] - layer_keys = [k.replace("_bn.s", "_bn.scale") for k in layer_keys] - layer_keys = [k.replace(".biasranch", ".branch") for k in layer_keys] - layer_keys = [k.replace("bbox.pred", "bbox_pred") for k in layer_keys] - layer_keys = [k.replace("cls.score", "cls_score") for k in layer_keys] - layer_keys = [k.replace("res.conv1_", "conv1_") for k in layer_keys] - - # RPN / Faster RCNN - layer_keys = [k.replace(".biasbox", ".bbox") for k in layer_keys] - layer_keys = [k.replace("conv.rpn", "rpn.conv") for k in layer_keys] - layer_keys = [k.replace("rpn.bbox.pred", "rpn.bbox_pred") for k in layer_keys] - layer_keys = [k.replace("rpn.cls.logits", "rpn.cls_logits") for k in layer_keys] - - # Affine-Channel -> BatchNorm enaming - layer_keys = [k.replace("_bn.scale", "_bn.weight") for k in layer_keys] - - # Make torchvision-compatible - layer_keys = [k.replace("conv1_bn.", "bn1.") for k in layer_keys] - - layer_keys = [k.replace("res2.", "layer1.") for k in layer_keys] - layer_keys = [k.replace("res3.", "layer2.") for k in layer_keys] - layer_keys = [k.replace("res4.", "layer3.") for k in layer_keys] - layer_keys = [k.replace("res5.", "layer4.") for k in layer_keys] - - layer_keys = [k.replace(".branch2a.", ".conv1.") for k in layer_keys] - layer_keys = [k.replace(".branch2a_bn.", ".bn1.") for k in layer_keys] - layer_keys = 
[k.replace(".branch2b.", ".conv2.") for k in layer_keys] - layer_keys = [k.replace(".branch2b_bn.", ".bn2.") for k in layer_keys] - layer_keys = [k.replace(".branch2c.", ".conv3.") for k in layer_keys] - layer_keys = [k.replace(".branch2c_bn.", ".bn3.") for k in layer_keys] - - layer_keys = [k.replace(".branch1.", ".downsample.0.") for k in layer_keys] - layer_keys = [k.replace(".branch1_bn.", ".downsample.1.") for k in layer_keys] - - # GroupNorm - layer_keys = [k.replace("conv1.gn.s", "bn1.weight") for k in layer_keys] - layer_keys = [k.replace("conv1.gn.bias", "bn1.bias") for k in layer_keys] - layer_keys = [k.replace("conv2.gn.s", "bn2.weight") for k in layer_keys] - layer_keys = [k.replace("conv2.gn.bias", "bn2.bias") for k in layer_keys] - layer_keys = [k.replace("conv3.gn.s", "bn3.weight") for k in layer_keys] - layer_keys = [k.replace("conv3.gn.bias", "bn3.bias") for k in layer_keys] - layer_keys = [k.replace("downsample.0.gn.s", "downsample.1.weight") \ - for k in layer_keys] - layer_keys = [k.replace("downsample.0.gn.bias", "downsample.1.bias") \ - for k in layer_keys] - - return layer_keys - -def _rename_fpn_weights(layer_keys, stage_names): - for mapped_idx, stage_name in enumerate(stage_names, 1): - suffix = "" - if mapped_idx < 4: - suffix = ".lateral" - layer_keys = [ - k.replace("fpn.inner.layer{}.sum{}".format(stage_name, suffix), "fpn_inner{}".format(mapped_idx)) for k in layer_keys - ] - layer_keys = [k.replace("fpn.layer{}.sum".format(stage_name), "fpn_layer{}".format(mapped_idx)) for k in layer_keys] - - - layer_keys = [k.replace("rpn.conv.fpn2", "rpn.conv") for k in layer_keys] - layer_keys = [k.replace("rpn.bbox_pred.fpn2", "rpn.bbox_pred") for k in layer_keys] - layer_keys = [ - k.replace("rpn.cls_logits.fpn2", "rpn.cls_logits") for k in layer_keys - ] - - return layer_keys - - -def _rename_weights_for_resnet(weights, stage_names): - original_keys = sorted(weights.keys()) - layer_keys = sorted(weights.keys()) - - # for X-101, rename output to fc1000 to avoid conflicts afterwards - layer_keys = [k if k != "pred_b" else "fc1000_b" for k in layer_keys] - layer_keys = [k if k != "pred_w" else "fc1000_w" for k in layer_keys] - - # performs basic renaming: _ -> . 
, etc - layer_keys = _rename_basic_resnet_weights(layer_keys) - - # FPN - layer_keys = _rename_fpn_weights(layer_keys, stage_names) - - # Mask R-CNN - layer_keys = [k.replace("mask.fcn.logits", "mask_fcn_logits") for k in layer_keys] - layer_keys = [k.replace(".[mask].fcn", "mask_fcn") for k in layer_keys] - layer_keys = [k.replace("conv5.mask", "conv5_mask") for k in layer_keys] - - # Keypoint R-CNN - layer_keys = [k.replace("kps.score.lowres", "kps_score_lowres") for k in layer_keys] - layer_keys = [k.replace("kps.score", "kps_score") for k in layer_keys] - layer_keys = [k.replace("conv.fcn", "conv_fcn") for k in layer_keys] - - # Rename for our RPN structure - layer_keys = [k.replace("rpn.", "rpn.head.") for k in layer_keys] - - key_map = {k: v for k, v in zip(original_keys, layer_keys)} - - logger = logging.getLogger(__name__) - logger.info("Remapping C2 weights") - max_c2_key_size = max([len(k) for k in original_keys if "_momentum" not in k]) - - new_weights = OrderedDict() - for k in original_keys: - v = weights[k] - if "_momentum" in k: - continue - # if 'fc1000' in k: - # continue - w = torch.from_numpy(v) - # if "bn" in k: - # w = w.view(1, -1, 1, 1) - logger.info("C2 name: {: <{}} mapped name: {}".format(k, max_c2_key_size, key_map[k])) - new_weights[key_map[k]] = w - - return new_weights - - -def _load_c2_pickled_weights(file_path): - with open(file_path, "rb") as f: - if torch._six.PY3: - data = pickle.load(f, encoding="latin1") - else: - data = pickle.load(f) - if "blobs" in data: - weights = data["blobs"] - else: - weights = data - return weights - - -_C2_STAGE_NAMES = { - "R-50": ["1.2", "2.3", "3.5", "4.2"], - "R-101": ["1.2", "2.3", "3.22", "4.2"], - "R-152": ["1.2", "2.7", "3.35", "4.2"], -} - -C2_FORMAT_LOADER = Registry() - - -@C2_FORMAT_LOADER.register("R-50-C4") -@C2_FORMAT_LOADER.register("R-50-C5") -@C2_FORMAT_LOADER.register("R-101-C4") -@C2_FORMAT_LOADER.register("R-101-C5") -@C2_FORMAT_LOADER.register("R-50-FPN") -@C2_FORMAT_LOADER.register("R-50-FPN-RETINANET") -@C2_FORMAT_LOADER.register("R-101-FPN") -@C2_FORMAT_LOADER.register("R-101-PAN") -@C2_FORMAT_LOADER.register("R-101-FPN-RETINANET") -@C2_FORMAT_LOADER.register("R-152-FPN") -@C2_FORMAT_LOADER.register("R-152-PAN") -def load_resnet_c2_format(cfg, f): - state_dict = _load_c2_pickled_weights(f) - conv_body = cfg.MODEL.BACKBONE.CONV_BODY - arch = conv_body.replace("-C4", "").replace("-C5", "").replace("-FPN", "") - arch = arch.replace("-RETINANET", "").replace("-PAN", "") - stages = _C2_STAGE_NAMES[arch] - state_dict = _rename_weights_for_resnet(state_dict, stages) - return dict(model=state_dict) - - -def load_c2_format(cfg, f): - return C2_FORMAT_LOADER[cfg.MODEL.BACKBONE.CONV_BODY](cfg, f) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_backends/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_backends/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-3370be2a.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-3370be2a.js deleted file mode 100644 index 727cf49fce2eb4e98d5f5c9f68aac2dcde37f774..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-3370be2a.js +++ /dev/null @@ -1,16 +0,0 @@ -(function(){const 
t=document.createElement("link").relList;if(t&&t.supports&&t.supports("modulepreload"))return;for(const i of document.querySelectorAll('link[rel="modulepreload"]'))n(i);new MutationObserver(i=>{for(const o of i)if(o.type==="childList")for(const s of o.addedNodes)s.tagName==="LINK"&&s.rel==="modulepreload"&&n(s)}).observe(document,{childList:!0,subtree:!0});function r(i){const o={};return i.integrity&&(o.integrity=i.integrity),i.referrerPolicy&&(o.referrerPolicy=i.referrerPolicy),i.crossOrigin==="use-credentials"?o.credentials="include":i.crossOrigin==="anonymous"?o.credentials="omit":o.credentials="same-origin",o}function n(i){if(i.ep)return;i.ep=!0;const o=r(i);fetch(i.href,o)}})();var ei=typeof globalThis<"u"?globalThis:typeof window<"u"?window:typeof global<"u"?global:typeof self<"u"?self:{};function kr(e){return e&&e.__esModule&&Object.prototype.hasOwnProperty.call(e,"default")?e.default:e}var Ye={},Pe={},_t={exports:{}},R=String,tr=function(){return{isColorSupported:!1,reset:R,bold:R,dim:R,italic:R,underline:R,inverse:R,hidden:R,strikethrough:R,black:R,red:R,green:R,yellow:R,blue:R,magenta:R,cyan:R,white:R,gray:R,bgBlack:R,bgRed:R,bgGreen:R,bgYellow:R,bgBlue:R,bgMagenta:R,bgCyan:R,bgWhite:R}};_t.exports=tr();_t.exports.createColors=tr;var zr=_t.exports;Object.defineProperty(Pe,"__esModule",{value:!0});Pe.dim=Ar;Pe.default=void 0;var le=xr(zr);function xr(e){return e&&e.__esModule?e:{default:e}}let kt=new Set;function rt(e,t,r){typeof process<"u"&&{}.JEST_WORKER_ID||r&&kt.has(r)||(r&&kt.add(r),console.warn(""),t.forEach(n=>console.warn(e,"-",n)))}function Ar(e){return le.default.dim(e)}var Er={info(e,t){rt(le.default.bold(le.default.cyan("info")),...Array.isArray(e)?[e]:[t,e])},warn(e,t){rt(le.default.bold(le.default.yellow("warn")),...Array.isArray(e)?[e]:[t,e])},risk(e,t){rt(le.default.bold(le.default.magenta("risk")),...Array.isArray(e)?[e]:[t,e])}};Pe.default=Er;Object.defineProperty(Ye,"__esModule",{value:!0});Ye.default=void 0;var Sr=Nr(Pe);function Nr(e){return e&&e.__esModule?e:{default:e}}function Ne({version:e,from:t,to:r}){Sr.default.warn(`${t}-color-renamed`,[`As of Tailwind CSS ${e}, \`${t}\` has been renamed to \`${r}\`.`,"Update your configuration file to silence this warning."])}var 
qr={inherit:"inherit",current:"currentColor",transparent:"transparent",black:"#000",white:"#fff",slate:{50:"#f8fafc",100:"#f1f5f9",200:"#e2e8f0",300:"#cbd5e1",400:"#94a3b8",500:"#64748b",600:"#475569",700:"#334155",800:"#1e293b",900:"#0f172a"},gray:{50:"#f9fafb",100:"#f3f4f6",200:"#e5e7eb",300:"#d1d5db",400:"#9ca3af",500:"#6b7280",600:"#4b5563",700:"#374151",800:"#1f2937",900:"#111827"},zinc:{50:"#fafafa",100:"#f4f4f5",200:"#e4e4e7",300:"#d4d4d8",400:"#a1a1aa",500:"#71717a",600:"#52525b",700:"#3f3f46",800:"#27272a",900:"#18181b"},neutral:{50:"#fafafa",100:"#f5f5f5",200:"#e5e5e5",300:"#d4d4d4",400:"#a3a3a3",500:"#737373",600:"#525252",700:"#404040",800:"#262626",900:"#171717"},stone:{50:"#fafaf9",100:"#f5f5f4",200:"#e7e5e4",300:"#d6d3d1",400:"#a8a29e",500:"#78716c",600:"#57534e",700:"#44403c",800:"#292524",900:"#1c1917"},red:{50:"#fef2f2",100:"#fee2e2",200:"#fecaca",300:"#fca5a5",400:"#f87171",500:"#ef4444",600:"#dc2626",700:"#b91c1c",800:"#991b1b",900:"#7f1d1d"},orange:{50:"#fff7ed",100:"#ffedd5",200:"#fed7aa",300:"#fdba74",400:"#fb923c",500:"#f97316",600:"#ea580c",700:"#c2410c",800:"#9a3412",900:"#7c2d12"},amber:{50:"#fffbeb",100:"#fef3c7",200:"#fde68a",300:"#fcd34d",400:"#fbbf24",500:"#f59e0b",600:"#d97706",700:"#b45309",800:"#92400e",900:"#78350f"},yellow:{50:"#fefce8",100:"#fef9c3",200:"#fef08a",300:"#fde047",400:"#facc15",500:"#eab308",600:"#ca8a04",700:"#a16207",800:"#854d0e",900:"#713f12"},lime:{50:"#f7fee7",100:"#ecfccb",200:"#d9f99d",300:"#bef264",400:"#a3e635",500:"#84cc16",600:"#65a30d",700:"#4d7c0f",800:"#3f6212",900:"#365314"},green:{50:"#f0fdf4",100:"#dcfce7",200:"#bbf7d0",300:"#86efac",400:"#4ade80",500:"#22c55e",600:"#16a34a",700:"#15803d",800:"#166534",900:"#14532d"},emerald:{50:"#ecfdf5",100:"#d1fae5",200:"#a7f3d0",300:"#6ee7b7",400:"#34d399",500:"#10b981",600:"#059669",700:"#047857",800:"#065f46",900:"#064e3b"},teal:{50:"#f0fdfa",100:"#ccfbf1",200:"#99f6e4",300:"#5eead4",400:"#2dd4bf",500:"#14b8a6",600:"#0d9488",700:"#0f766e",800:"#115e59",900:"#134e4a"},cyan:{50:"#ecfeff",100:"#cffafe",200:"#a5f3fc",300:"#67e8f9",400:"#22d3ee",500:"#06b6d4",600:"#0891b2",700:"#0e7490",800:"#155e75",900:"#164e63"},sky:{50:"#f0f9ff",100:"#e0f2fe",200:"#bae6fd",300:"#7dd3fc",400:"#38bdf8",500:"#0ea5e9",600:"#0284c7",700:"#0369a1",800:"#075985",900:"#0c4a6e"},blue:{50:"#eff6ff",100:"#dbeafe",200:"#bfdbfe",300:"#93c5fd",400:"#60a5fa",500:"#3b82f6",600:"#2563eb",700:"#1d4ed8",800:"#1e40af",900:"#1e3a8a"},indigo:{50:"#eef2ff",100:"#e0e7ff",200:"#c7d2fe",300:"#a5b4fc",400:"#818cf8",500:"#6366f1",600:"#4f46e5",700:"#4338ca",800:"#3730a3",900:"#312e81"},violet:{50:"#f5f3ff",100:"#ede9fe",200:"#ddd6fe",300:"#c4b5fd",400:"#a78bfa",500:"#8b5cf6",600:"#7c3aed",700:"#6d28d9",800:"#5b21b6",900:"#4c1d95"},purple:{50:"#faf5ff",100:"#f3e8ff",200:"#e9d5ff",300:"#d8b4fe",400:"#c084fc",500:"#a855f7",600:"#9333ea",700:"#7e22ce",800:"#6b21a8",900:"#581c87"},fuchsia:{50:"#fdf4ff",100:"#fae8ff",200:"#f5d0fe",300:"#f0abfc",400:"#e879f9",500:"#d946ef",600:"#c026d3",700:"#a21caf",800:"#86198f",900:"#701a75"},pink:{50:"#fdf2f8",100:"#fce7f3",200:"#fbcfe8",300:"#f9a8d4",400:"#f472b6",500:"#ec4899",600:"#db2777",700:"#be185d",800:"#9d174d",900:"#831843"},rose:{50:"#fff1f2",100:"#ffe4e6",200:"#fecdd3",300:"#fda4af",400:"#fb7185",500:"#f43f5e",600:"#e11d48",700:"#be123c",800:"#9f1239",900:"#881337"},get lightBlue(){return Ne({version:"v2.2",from:"lightBlue",to:"sky"}),this.sky},get warmGray(){return Ne({version:"v3.0",from:"warmGray",to:"stone"}),this.stone},get trueGray(){return 
Ne({version:"v3.0",from:"trueGray",to:"neutral"}),this.neutral},get coolGray(){return Ne({version:"v3.0",from:"coolGray",to:"gray"}),this.gray},get blueGray(){return Ne({version:"v3.0",from:"blueGray",to:"slate"}),this.slate}};Ye.default=qr;let nt=Ye;var Cr=(nt.__esModule?nt:{default:nt}).default;const zt=kr(Cr),ti=["red","green","blue","yellow","purple","teal","orange","cyan","lime","pink"],Lr=[{color:"red",primary:600,secondary:100},{color:"green",primary:600,secondary:100},{color:"blue",primary:600,secondary:100},{color:"yellow",primary:500,secondary:100},{color:"purple",primary:600,secondary:100},{color:"teal",primary:600,secondary:100},{color:"orange",primary:600,secondary:100},{color:"cyan",primary:600,secondary:100},{color:"lime",primary:500,secondary:100},{color:"pink",primary:600,secondary:100}],ri=Lr.reduce((e,{color:t,primary:r,secondary:n})=>({...e,[t]:{primary:zt[t][r],secondary:zt[t][n]}}),{}),Mr="modulepreload",Or=function(e,t){return new URL(e,t).href},xt={},Ge=function(t,r,n){if(!r||r.length===0)return t();const i=document.getElementsByTagName("link");return Promise.all(r.map(o=>{if(o=Or(o,n),o in xt)return;xt[o]=!0;const s=o.endsWith(".css"),a=s?'[rel="stylesheet"]':"";if(!!n)for(let f=i.length-1;f>=0;f--){const u=i[f];if(u.href===o&&(!s||u.rel==="stylesheet"))return}else if(document.querySelector(`link[href="${o}"]${a}`))return;const l=document.createElement("link");if(l.rel=s?"stylesheet":Mr,s||(l.as="script",l.crossOrigin=""),l.href=o,document.head.appendChild(l),s)return new Promise((f,u)=>{l.addEventListener("load",f),l.addEventListener("error",()=>u(new Error(`Unable to preload CSS for ${o}`)))})})).then(()=>t())};var it=new Intl.Collator(0,{numeric:1}).compare;function At(e,t,r){return e=e.split("."),t=t.split("."),it(e[0],t[0])||it(e[1],t[1])||(t[2]=t.slice(2).join("."),r=/[.-]/.test(e[2]=e.slice(2).join(".")),r==/[.-]/.test(t[2])?it(e[2],t[2]):r?-1:1)}function ot(e){if(e.startsWith("http")){const{protocol:t,host:r}=new URL(e);return r.endsWith("hf.space")?{ws_protocol:"wss",host:r,http_protocol:t}:{ws_protocol:t==="https:"?"wss":"ws",http_protocol:t,host:r}}return{ws_protocol:"wss",http_protocol:"https:",host:e}}const rr=/^[^\/]*\/[^\/]*$/,Pr=/.*hf\.space\/{0,1}$/;async function Tr(e,t){const r={};t&&(r.Authorization=`Bearer ${t}`);const n=e.trim();if(rr.test(n))try{const i=await fetch(`https://huggingface.co/api/spaces/${n}/host`,{headers:r});if(i.status!==200)throw new Error("Space metadata could not be loaded.");const o=(await i.json()).host;return{space_id:e,...ot(o)}}catch(i){throw new Error("Space metadata could not be loaded."+i.message)}if(Pr.test(n)){const{ws_protocol:i,http_protocol:o,host:s}=ot(n);return{space_id:s.replace(".hf.space",""),ws_protocol:i,http_protocol:o,host:s}}return{space_id:!1,...ot(n)}}function Br(e){let t={};return e.forEach(({api_name:r},n)=>{r&&(t[r]=n)}),t}const Fr=/^(?=[^]*\b[dD]iscussions{0,1}\b)(?=[^]*\b[dD]isabled\b)[^]*$/;async function Et(e){try{const r=(await fetch(`https://huggingface.co/api/spaces/${e}/discussions`,{method:"HEAD"})).headers.get("x-error-message");return!(r&&Fr.test(r))}catch{return!1}}const Rr="This application is too busy. 
Keep trying!",Re="Connection errored out.";let nr;function Dr(e){return{post_data:t,upload_files:r,client:n,handle_blob:i};async function t(o,s,a){const c={"Content-Type":"application/json"};a&&(c.Authorization=`Bearer ${a}`);try{var l=await e(o,{method:"POST",body:JSON.stringify(s),headers:c})}catch{return[{error:Re},500]}return[await l.json(),l.status]}async function r(o,s,a){const c={};a&&(c.Authorization=`Bearer ${a}`);const l=new FormData;s.forEach(g=>{l.append("files",g)});try{var f=await e(`${o}/upload`,{method:"POST",body:l,headers:c})}catch{return{error:Re}}return{files:await f.json()}}async function n(o,s={normalise_files:!0}){return new Promise(async a=>{const{status_callback:c,hf_token:l,normalise_files:f}=s,u={predict:M,submit:U,view_api:ee},g=f??!0;if(typeof window>"u"||!("WebSocket"in window)){const C=await Ge(()=>import("./wrapper-6f348d45-38be7a64.js"),["./wrapper-6f348d45-38be7a64.js","./__vite-browser-external-b25bb000.js"],import.meta.url);nr=(await Ge(()=>import("./__vite-browser-external-b25bb000.js"),[],import.meta.url)).Blob,global.WebSocket=C.WebSocket}const{ws_protocol:h,http_protocol:d,host:w,space_id:b}=await Tr(o,l),z=Math.random().toString(36).substring(2),N={};let p,x={},E=!1;l&&b&&(E=await jr(b,l));async function T(C){p=C,x=Br(C?.dependencies||[]);try{A=await ee(p)}catch(D){console.error(`Could not get api details: ${D.message}`)}return{config:p,...u}}let A;async function L(C){if(c&&c(C),C.status==="running")try{p=await Mt(e,`${d}//${w}`,l);const D=await T(p);a(D)}catch(D){console.error(D),c&&c({status:"error",message:"Could not load this space.",load_status:"error",detail:"NOT_FOUND"})}}try{p=await Mt(e,`${d}//${w}`,l);const C=await T(p);a(C)}catch(C){console.error(C),b?ft(b,rr.test(b)?"space_name":"subdomain",L):c&&c({status:"error",message:"Could not load this space.",load_status:"error",detail:"NOT_FOUND"})}function M(C,D,te){let q=!1,W=!1;return new Promise((j,I)=>{const ie=U(C,D,te);ie.on("data",Z=>{q=!0,W&&ie.destroy(),j(Z)}).on("status",Z=>{Z.stage==="error"&&I(Z),Z.stage==="complete"&&q&&ie.destroy(),Z.stage==="complete"&&(W=!0)})})}function U(C,D,te){let q,W;if(typeof C=="number")q=C,W=A.unnamed_endpoints[q];else{const G=C.replace(/^\//,"");q=x[G],W=A.named_endpoints[C.trim()]}if(typeof q!="number")throw new Error("There is no endpoint matching that name of fn_index matching that number.");let j;const I=typeof C=="number"?"/predict":C;let ie,Z=!1;const y={};i(`${d}//${w+p.path}`,D,W,l).then(G=>{if(ie={data:G||[],event_data:te,fn_index:q},Gr(q,p))X({type:"status",endpoint:I,stage:"pending",queue:!1,fn_index:q,time:new Date}),t(`${d}//${w+p.path}/run${I.startsWith("/")?I:`/${I}`}`,{...ie,session_hash:z},l).then(([m,F])=>{const P=g?qt(m.data,W,p.root,p.root_url):m.data;F==200?(X({type:"data",endpoint:I,fn_index:q,data:P,time:new Date}),X({type:"status",endpoint:I,fn_index:q,stage:"complete",eta:m.average_duration,queue:!1,time:new Date})):X({type:"status",stage:"error",endpoint:I,fn_index:q,message:m.error,queue:!1,time:new Date})}).catch(m=>{X({type:"status",stage:"error",message:m.message,endpoint:I,fn_index:q,queue:!1,time:new Date})});else{X({type:"status",stage:"pending",queue:!0,endpoint:I,fn_index:q,time:new Date});let m=new URL(`${h}://${w}${p.path} - /queue/join`);E&&m.searchParams.set("__sign",E),j=new WebSocket(m),j.onclose=F=>{F.wasClean||X({type:"status",stage:"error",broken:!0,message:Re,queue:!0,endpoint:I,fn_index:q,time:new Date})},j.onmessage=function(F){const 
P=JSON.parse(F.data),{type:Y,status:se,data:Se}=Vr(P,N[q]);if(Y==="update"&&se&&!Z)X({type:"status",endpoint:I,fn_index:q,time:new Date,...se}),se.stage==="error"&&j.close();else if(Y==="hash"){j.send(JSON.stringify({fn_index:q,session_hash:z}));return}else Y==="data"?j.send(JSON.stringify({...ie,session_hash:z})):Y==="complete"?Z=se:Y==="log"?X({type:"log",log:Se.log,level:Se.level,endpoint:I,fn_index:q}):Y==="generating"&&X({type:"status",time:new Date,...se,stage:se?.stage,queue:!0,endpoint:I,fn_index:q});Se&&(X({type:"data",time:new Date,data:g?qt(Se.data,W,p.root,p.root_url):Se.data,endpoint:I,fn_index:q}),Z&&(X({type:"status",time:new Date,...Z,stage:se?.stage,queue:!0,endpoint:I,fn_index:q}),j.close()))},At(p.version||"2.0.0","3.6")<0&&addEventListener("open",()=>j.send(JSON.stringify({hash:z})))}});function X(G){const F=y[G.type]||[];F?.forEach(P=>P(G))}function xe(G,m){const F=y,P=F[G]||[];return F[G]=P,P?.push(m),{on:xe,off:ge,cancel:Ae,destroy:Ee}}function ge(G,m){const F=y;let P=F[G]||[];return P=P?.filter(Y=>Y!==m),F[G]=P,{on:xe,off:ge,cancel:Ae,destroy:Ee}}async function Ae(){const G={stage:"complete",queue:!1,time:new Date};Z=G,X({...G,type:"status",endpoint:I,fn_index:q}),j&&j.readyState===0?j.addEventListener("open",()=>{j.close()}):j.close();try{await e(`${d}//${w+p.path}/reset`,{headers:{"Content-Type":"application/json"},method:"POST",body:JSON.stringify({fn_index:q,session_hash:z})})}catch{console.warn("The `/reset` endpoint could not be called. Subsequent endpoint results may be unreliable.")}}function Ee(){for(const G in y)y[G].forEach(m=>{ge(G,m)})}return{on:xe,off:ge,cancel:Ae,destroy:Ee}}async function ee(C){if(A)return A;const D={"Content-Type":"application/json"};l&&(D.Authorization=`Bearer ${l}`);let te;if(At(C.version||"2.0.0","3.30")<0?te=await e("https://gradio-space-api-fetcher-v2.hf.space/api",{method:"POST",body:JSON.stringify({serialize:!1,config:JSON.stringify(C)}),headers:D}):te=await e(`${C.root}/info`,{headers:D}),!te.ok)throw new Error(Re);let q=await te.json();return"api"in q&&(q=q.api),q.named_endpoints["/predict"]&&!q.unnamed_endpoints[0]&&(q.unnamed_endpoints[0]=q.named_endpoints["/predict"]),Ir(q,C,x)}})}async function i(o,s,a,c){const l=await ct(s,void 0,[],!0,a);return Promise.all(l.map(async({path:f,blob:u,data:g,type:h})=>{if(u){const d=(await r(o,[u],c)).files[0];return{path:f,file_url:d,type:h}}else return{path:f,base64:g,type:h}})).then(f=>(f.forEach(({path:u,file_url:g,base64:h,type:d})=>{if(h)st(s,h,u);else if(d==="Gallery")st(s,g,u);else if(g){const w={is_file:!0,name:`${g}`,data:null};st(s,w,u)}}),s))}}const{post_data:ni,upload_files:St,client:Nt,handle_blob:ii}=Dr(fetch);function qt(e,t,r,n){return e.map((i,o)=>{var s,a,c,l;return((a=(s=t.returns)==null?void 0:s[o])==null?void 0:a.component)==="File"?Ce(i,r,n):((l=(c=t.returns)==null?void 0:c[o])==null?void 0:l.component)==="Gallery"?i.map(f=>Array.isArray(f)?[Ce(f[0],r,n),f[1]]:[Ce(f,r,n),null]):typeof i=="object"&&i.is_file?Ce(i,r,n):i})}function Ce(e,t,r){if(e==null)return null;if(typeof e=="string")return{name:"file_data",data:e};if(Array.isArray(e)){const n=[];for(const i of e)i===null?n.push(null):n.push(Ce(i,t,r));return n}else e.is_file&&(r?e.data="/proxy="+r+"file="+e.name:e.data=t+"/file="+e.name);return e}function Ct(e,t,r,n){switch(e.type){case"string":return"string";case"boolean":return"boolean";case"number":return"number"}if(r==="JSONSerializable"||r==="StringSerializable")return"any";if(r==="ListStringSerializable")return"string[]";if(t==="Image")return 
n==="parameter"?"Blob | File | Buffer":"string";if(r==="FileSerializable")return e?.type==="array"?n==="parameter"?"(Blob | File | Buffer)[]":"{ name: string; data: string; size?: number; is_file?: boolean; orig_name?: string}[]":n==="parameter"?"Blob | File | Buffer":"{ name: string; data: string; size?: number; is_file?: boolean; orig_name?: string}";if(r==="GallerySerializable")return n==="parameter"?"[(Blob | File | Buffer), (string | null)][]":"[{ name: string; data: string; size?: number; is_file?: boolean; orig_name?: string}, (string | null))][]"}function Lt(e,t){return t==="GallerySerializable"?"array of [file, label] tuples":t==="ListStringSerializable"?"array of strings":t==="FileSerializable"?"array of files or single file":e.description}function Ir(e,t,r){const n={named_endpoints:{},unnamed_endpoints:{}};for(const i in e){const o=e[i];for(const s in o){const a=t.dependencies[s]?s:r[s.replace("/","")],c=o[s];n[i][s]={},n[i][s].parameters={},n[i][s].returns={},n[i][s].type=t.dependencies[a].types,n[i][s].parameters=c.parameters.map(({label:l,component:f,type:u,serializer:g})=>({label:l,component:f,type:Ct(u,f,g,"parameter"),description:Lt(u,g)})),n[i][s].returns=c.returns.map(({label:l,component:f,type:u,serializer:g})=>({label:l,component:f,type:Ct(u,f,g,"return"),description:Lt(u,g)}))}}return n}async function jr(e,t){try{return(await(await fetch(`https://huggingface.co/api/spaces/${e}/jwt`,{headers:{Authorization:`Bearer ${t}`}})).json()).token||!1}catch(r){return console.error(r),!1}}function st(e,t,r){for(;r.length>1;)e=e[r.shift()];e[r.shift()]=t}async function ct(e,t=void 0,r=[],n=!1,i=void 0){if(Array.isArray(e)){let o=[];return await Promise.all(e.map(async(s,a)=>{var c;let l=r.slice();l.push(a);const f=await ct(e[a],n?((c=i?.parameters[a])==null?void 0:c.component)||void 0:t,l,!1,i);o=o.concat(f)})),o}else if(globalThis.Buffer&&e instanceof globalThis.Buffer){const o=t==="Image";return[{path:r,blob:o?!1:new nr([e]),data:o?`${e.toString("base64")}`:!1,type:t}]}else if(e instanceof Blob||typeof window<"u"&&e instanceof File)if(t==="Image"){let o;if(typeof window<"u")o=await Ur(e);else{const s=await e.arrayBuffer();o=Buffer.from(s).toString("base64")}return[{path:r,data:o,type:t}]}else return[{path:r,blob:e,type:t}];else if(typeof e=="object"){let o=[];for(let s in e)if(e.hasOwnProperty(s)){let a=r.slice();a.push(s),o=o.concat(await ct(e[s],void 0,a,!1,i))}return o}else return[]}function Ur(e){return new Promise((t,r)=>{const n=new FileReader;n.onloadend=()=>t(n.result),n.readAsDataURL(e)})}function Gr(e,t){var r,n,i,o;return!(((n=(r=t?.dependencies)==null?void 0:r[e])==null?void 0:n.queue)===null?t.enable_queue:(o=(i=t?.dependencies)==null?void 0:i[e])!=null&&o.queue)||!1}async function Mt(e,t,r){const n={};if(r&&(n.Authorization=`Bearer ${r}`),typeof window<"u"&&window.gradio_config&&location.origin!=="http://localhost:9876"){const i=window.gradio_config.root,o=window.gradio_config;return o.root=t+o.root,{...o,path:i}}else if(t){let i=await e(`${t}/config`,{headers:n});if(i.status===200){const o=await i.json();return o.path=o.path??"",o.root=t,o}else throw new Error("Could not get config.")}throw new Error("No config or app endpoint found")}async function ft(e,t,r){let n=t==="subdomain"?`https://huggingface.co/api/spaces/by-subdomain/${e}`:`https://huggingface.co/api/spaces/${e}`,i,o;try{if(i=await fetch(n),o=i.status,o!==200)throw new Error;i=await i.json()}catch{r({status:"error",load_status:"error",message:"Could not get space 
status",detail:"NOT_FOUND"});return}if(!i||o!==200)return;const{runtime:{stage:s},id:a}=i;switch(s){case"STOPPED":case"SLEEPING":r({status:"sleeping",load_status:"pending",message:"Space is asleep. Waking it up...",detail:s}),setTimeout(()=>{ft(e,t,r)},1e3);break;case"PAUSED":r({status:"paused",load_status:"error",message:"This space has been paused by the author. If you would like to try this demo, consider duplicating the space.",detail:s,discussions_enabled:await Et(a)});break;case"RUNNING":case"RUNNING_BUILDING":r({status:"running",load_status:"complete",message:"",detail:s});break;case"BUILDING":r({status:"building",load_status:"pending",message:"Space is building...",detail:s}),setTimeout(()=>{ft(e,t,r)},1e3);break;default:r({status:"space_error",load_status:"error",message:"This space is experiencing an issue.",detail:s,discussions_enabled:await Et(a)});break}}function Vr(e,t){switch(e.msg){case"send_data":return{type:"data"};case"send_hash":return{type:"hash"};case"queue_full":return{type:"update",status:{queue:!0,message:Rr,stage:"error",code:e.code,success:e.success}};case"estimation":return{type:"update",status:{queue:!0,stage:t||"pending",code:e.code,size:e.queue_size,position:e.rank,eta:e.rank_eta,success:e.success}};case"progress":return{type:"update",status:{queue:!0,stage:"pending",code:e.code,progress_data:e.progress_data,success:e.success}};case"log":return{type:"log",data:e};case"process_generating":return{type:"generating",status:{queue:!0,message:e.success?null:e.output.error,stage:e.success?"generating":"error",code:e.code,progress_data:e.progress_data,eta:e.average_duration},data:e.success?e.output:null};case"process_completed":return"error"in e.output?{type:"update",status:{queue:!0,message:e.output.error,stage:"error",code:e.code,success:e.success}}:{type:"complete",status:{queue:!0,message:e.success?void 0:e.output.error,stage:e.success?"complete":"error",code:e.code,progress_data:e.progress_data,eta:e.output.average_duration},data:e.success?e.output:null};case"process_starts":return{type:"update",status:{queue:!0,stage:"pending",code:e.code,size:e.rank,position:0,success:e.success}}}return{type:"none",status:{stage:"error",queue:!0}}}function ut(e,t){if(document.querySelector(`link[href='${e}']`))return Promise.resolve();const n=document.createElement("link");return n.rel="stylesheet",n.href=e,t.appendChild(n),new Promise((i,o)=>{n.addEventListener("load",()=>i()),n.addEventListener("error",()=>{console.error(`Unable to preload CSS for ${e}`),i()})})}function V(){}const bt=e=>e;function ir(e,t){for(const r in t)e[r]=t[r];return e}function or(e){return e()}function Ot(){return Object.create(null)}function ae(e){e.forEach(or)}function ue(e){return typeof e=="function"}function Te(e,t){return e!=e?t==t:e!==t||e&&typeof e=="object"||typeof e=="function"}let De;function Wr(e,t){return De||(De=document.createElement("a")),De.href=t,e===De.href}function Hr(e){return Object.keys(e).length===0}function sr(e,...t){if(e==null){for(const n of t)n(void 0);return V}const r=e.subscribe(...t);return r.unsubscribe?()=>r.unsubscribe():r}function Ve(e,t,r){e.$$.on_destroy.push(sr(t,r))}function ar(e,t,r,n){if(e){const i=lr(e,t,r,n);return e[0](i)}}function lr(e,t,r,n){return e[1]&&n?ir(r.ctx.slice(),e[1](n(t))):r.ctx}function cr(e,t,r,n){if(e[2]&&n){const i=e[2](n(r));if(t.dirty===void 0)return i;if(typeof i=="object"){const o=[],s=Math.max(t.dirty.length,i.length);for(let a=0;a32){const t=[],r=e.ctx.length/32;for(let 
n=0;nwindow.performance.now():()=>Date.now(),wt=dr?e=>requestAnimationFrame(e):V;const we=new Set;function pr(e){we.forEach(t=>{t.c(e)||(we.delete(t),t.f())}),we.size!==0&&wt(pr)}function $e(e){let t;return we.size===0&&wt(pr),{promise:new Promise(r=>{we.add(t={c:e,f:r})}),abort(){we.delete(t)}}}const Jr=typeof window<"u"?window:typeof globalThis<"u"?globalThis:global;"WeakMap"in Jr;function S(e,t){e.appendChild(t)}function gr(e){if(!e)return document;const t=e.getRootNode?e.getRootNode():e.ownerDocument;return t&&t.host?t:e.ownerDocument}function Zr(e){const t=B("style");return t.textContent="/* empty */",Qr(gr(e),t),t.sheet}function Qr(e,t){return S(e.head||e,t),t.sheet}function k(e,t,r){e.insertBefore(t,r||null)}function v(e){e.parentNode&&e.parentNode.removeChild(e)}function hr(e,t){for(let r=0;re.removeEventListener(t,r,n)}function ci(e){return function(t){return t.preventDefault(),e.call(this,t)}}function fi(e){return function(t){return t.stopPropagation(),e.call(this,t)}}function _(e,t,r){r==null?e.removeAttribute(t):e.getAttribute(t)!==r&&e.setAttribute(t,r)}const Kr=["width","height"];function Xr(e,t){const r=Object.getOwnPropertyDescriptors(e.__proto__);for(const n in t)t[n]==null?e.removeAttribute(n):n==="style"?e.style.cssText=t[n]:n==="__value"?e.value=e[n]=t[n]:r[n]&&r[n].set&&Kr.indexOf(n)===-1?e[n]=t[n]:_(e,n,t[n])}function Yr(e,t){Object.keys(t).forEach(r=>{$r(e,r,t[r])})}function $r(e,t,r){t in e?e[t]=typeof e[t]=="boolean"&&r===""?!0:r:_(e,t,r)}function ui(e){return/-/.test(e)?Yr:Xr}function di(e){let t;return{p(...r){t=r,t.forEach(n=>e.push(n))},r(){t.forEach(r=>e.splice(e.indexOf(r),1))}}}function pi(e){return e===""?null:+e}function en(e){return Array.from(e.childNodes)}function re(e,t){t=""+t,e.data!==t&&(e.data=t)}function gi(e,t){e.value=t??""}function Q(e,t,r,n){r==null?e.style.removeProperty(t):e.style.setProperty(t,r,n?"important":"")}let Ie;function tn(){if(Ie===void 0){Ie=!1;try{typeof window<"u"&&window.parent&&window.parent.document}catch{Ie=!0}}return Ie}function hi(e,t){getComputedStyle(e).position==="static"&&(e.style.position="relative");const n=B("iframe");n.setAttribute("style","display: block; position: absolute; top: 0; left: 0; width: 100%; height: 100%; overflow: hidden; border: 0; opacity: 0; pointer-events: none; z-index: -1;"),n.setAttribute("aria-hidden","true"),n.tabIndex=-1;const i=tn();let o;return i?(n.src="data:text/html, - - - -
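The minified bundle above compiles the @gradio/client connection logic. As a readable companion, the following Python sketch mirrors the host-resolution step that is visible in the bundle: the `https://huggingface.co/api/spaces/{id}/host` endpoint, the optional `Bearer` token header, and the `hf.space` check come from the code itself, while the function name and the use of the `requests` library are illustrative assumptions, not part of the deleted asset.

import re
import requests  # assumed dependency for this sketch

SPACE_ID_RE = re.compile(r"^[^/]*/[^/]*$")  # matches "owner/space_name"

def resolve_space(source: str, hf_token: str | None = None) -> dict:
    # Optional auth header, as in the JS client
    headers = {"Authorization": f"Bearer {hf_token}"} if hf_token else {}
    source = source.strip()
    if SPACE_ID_RE.match(source):
        # An "owner/space" id is resolved to its serving host via the HF API
        resp = requests.get(f"https://huggingface.co/api/spaces/{source}/host", headers=headers)
        if resp.status_code != 200:
            raise RuntimeError("Space metadata could not be loaded.")
        host = resp.json()["host"]
    else:
        host = source  # treat the input as a direct URL or host
    # Hosts under *.hf.space are served over HTTPS, so WebSockets use wss
    secure = host.endswith("hf.space") or host.startswith("https")
    return {
        "host": host,
        "http_protocol": "https:" if secure else "http:",
        "ws_protocol": "wss" if secure else "ws",
    }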
- - - diff --git a/spaces/IPN/DM_pb/README.md b/spaces/IPN/DM_pb/README.md deleted file mode 100644 index f27fda8fc7af9569c4cba72a952508ff63575a42..0000000000000000000000000000000000000000 --- a/spaces/IPN/DM_pb/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: DM_pb -emoji: 🏢 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/stylegan2_arch.py b/spaces/Iceclear/StableSR/StableSR/basicsr/archs/stylegan2_arch.py deleted file mode 100644 index 9ab37f5a33a2ef21641de35109c16b511a6df163..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/stylegan2_arch.py +++ /dev/null @@ -1,799 +0,0 @@ -import math -import random -import torch -from torch import nn -from torch.nn import functional as F - -from basicsr.ops.fused_act import FusedLeakyReLU, fused_leaky_relu -from basicsr.ops.upfirdn2d import upfirdn2d -from basicsr.utils.registry import ARCH_REGISTRY - - -class NormStyleCode(nn.Module): - - def forward(self, x): - """Normalize the style codes. - - Args: - x (Tensor): Style codes with shape (b, c). - - Returns: - Tensor: Normalized tensor. - """ - return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + 1e-8) - - -def make_resample_kernel(k): - """Make resampling kernel for UpFirDn. - - Args: - k (list[int]): A list indicating the 1D resample kernel magnitude. - - Returns: - Tensor: 2D resampled kernel. - """ - k = torch.tensor(k, dtype=torch.float32) - if k.ndim == 1: - k = k[None, :] * k[:, None] # to 2D kernel, outer product - # normalize - k /= k.sum() - return k - - -class UpFirDnUpsample(nn.Module): - """Upsample, FIR filter, and downsample (upsample version). - - References: - 1. https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.upfirdn.html # noqa: E501 - 2. http://www.ece.northwestern.edu/local-apps/matlabhelp/toolbox/signal/upfirdn.html # noqa: E501 - - Args: - resample_kernel (list[int]): A list indicating the 1D resample kernel - magnitude. - factor (int): Upsampling scale factor. Default: 2. - """ - - def __init__(self, resample_kernel, factor=2): - super(UpFirDnUpsample, self).__init__() - self.kernel = make_resample_kernel(resample_kernel) * (factor**2) - self.factor = factor - - pad = self.kernel.shape[0] - factor - self.pad = ((pad + 1) // 2 + factor - 1, pad // 2) - - def forward(self, x): - out = upfirdn2d(x, self.kernel.type_as(x), up=self.factor, down=1, pad=self.pad) - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(factor={self.factor})') - - -class UpFirDnDownsample(nn.Module): - """Upsample, FIR filter, and downsample (downsample version). - - Args: - resample_kernel (list[int]): A list indicating the 1D resample kernel - magnitude. - factor (int): Downsampling scale factor. Default: 2. - """ - - def __init__(self, resample_kernel, factor=2): - super(UpFirDnDownsample, self).__init__() - self.kernel = make_resample_kernel(resample_kernel) - self.factor = factor - - pad = self.kernel.shape[0] - factor - self.pad = ((pad + 1) // 2, pad // 2) - - def forward(self, x): - out = upfirdn2d(x, self.kernel.type_as(x), up=1, down=self.factor, pad=self.pad) - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(factor={self.factor})') - - -class UpFirDnSmooth(nn.Module): - """Upsample, FIR filter, and downsample (smooth version). 
- - Args: - resample_kernel (list[int]): A list indicating the 1D resample kernel - magnitude. - upsample_factor (int): Upsampling scale factor. Default: 1. - downsample_factor (int): Downsampling scale factor. Default: 1. - kernel_size (int): Kernel size: Default: 1. - """ - - def __init__(self, resample_kernel, upsample_factor=1, downsample_factor=1, kernel_size=1): - super(UpFirDnSmooth, self).__init__() - self.upsample_factor = upsample_factor - self.downsample_factor = downsample_factor - self.kernel = make_resample_kernel(resample_kernel) - if upsample_factor > 1: - self.kernel = self.kernel * (upsample_factor**2) - - if upsample_factor > 1: - pad = (self.kernel.shape[0] - upsample_factor) - (kernel_size - 1) - self.pad = ((pad + 1) // 2 + upsample_factor - 1, pad // 2 + 1) - elif downsample_factor > 1: - pad = (self.kernel.shape[0] - downsample_factor) + (kernel_size - 1) - self.pad = ((pad + 1) // 2, pad // 2) - else: - raise NotImplementedError - - def forward(self, x): - out = upfirdn2d(x, self.kernel.type_as(x), up=1, down=1, pad=self.pad) - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(upsample_factor={self.upsample_factor}' - f', downsample_factor={self.downsample_factor})') - - -class EqualLinear(nn.Module): - """Equalized Linear as StyleGAN2. - - Args: - in_channels (int): Size of each sample. - out_channels (int): Size of each output sample. - bias (bool): If set to ``False``, the layer will not learn an additive - bias. Default: ``True``. - bias_init_val (float): Bias initialized value. Default: 0. - lr_mul (float): Learning rate multiplier. Default: 1. - activation (None | str): The activation after ``linear`` operation. - Supported: 'fused_lrelu', None. Default: None. - """ - - def __init__(self, in_channels, out_channels, bias=True, bias_init_val=0, lr_mul=1, activation=None): - super(EqualLinear, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.lr_mul = lr_mul - self.activation = activation - if self.activation not in ['fused_lrelu', None]: - raise ValueError(f'Wrong activation value in EqualLinear: {activation}' - "Supported ones are: ['fused_lrelu', None].") - self.scale = (1 / math.sqrt(in_channels)) * lr_mul - - self.weight = nn.Parameter(torch.randn(out_channels, in_channels).div_(lr_mul)) - if bias: - self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val)) - else: - self.register_parameter('bias', None) - - def forward(self, x): - if self.bias is None: - bias = None - else: - bias = self.bias * self.lr_mul - if self.activation == 'fused_lrelu': - out = F.linear(x, self.weight * self.scale) - out = fused_leaky_relu(out, bias) - else: - out = F.linear(x, self.weight * self.scale, bias=bias) - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(in_channels={self.in_channels}, ' - f'out_channels={self.out_channels}, bias={self.bias is not None})') - - -class ModulatedConv2d(nn.Module): - """Modulated Conv2d used in StyleGAN2. - - There is no bias in ModulatedConv2d. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Size of the convolving kernel. - num_style_feat (int): Channel number of style features. - demodulate (bool): Whether to demodulate in the conv layer. - Default: True. - sample_mode (str | None): Indicating 'upsample', 'downsample' or None. - Default: None. - resample_kernel (list[int]): A list indicating the 1D resample kernel - magnitude. Default: (1, 3, 3, 1). 
- eps (float): A value added to the denominator for numerical stability. - Default: 1e-8. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - num_style_feat, - demodulate=True, - sample_mode=None, - resample_kernel=(1, 3, 3, 1), - eps=1e-8): - super(ModulatedConv2d, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.demodulate = demodulate - self.sample_mode = sample_mode - self.eps = eps - - if self.sample_mode == 'upsample': - self.smooth = UpFirDnSmooth( - resample_kernel, upsample_factor=2, downsample_factor=1, kernel_size=kernel_size) - elif self.sample_mode == 'downsample': - self.smooth = UpFirDnSmooth( - resample_kernel, upsample_factor=1, downsample_factor=2, kernel_size=kernel_size) - elif self.sample_mode is None: - pass - else: - raise ValueError(f'Wrong sample mode {self.sample_mode}, ' - "supported ones are ['upsample', 'downsample', None].") - - self.scale = 1 / math.sqrt(in_channels * kernel_size**2) - # modulation inside each modulated conv - self.modulation = EqualLinear( - num_style_feat, in_channels, bias=True, bias_init_val=1, lr_mul=1, activation=None) - - self.weight = nn.Parameter(torch.randn(1, out_channels, in_channels, kernel_size, kernel_size)) - self.padding = kernel_size // 2 - - def forward(self, x, style): - """Forward function. - - Args: - x (Tensor): Tensor with shape (b, c, h, w). - style (Tensor): Tensor with shape (b, num_style_feat). - - Returns: - Tensor: Modulated tensor after convolution. - """ - b, c, h, w = x.shape # c = c_in - # weight modulation - style = self.modulation(style).view(b, 1, c, 1, 1) - # self.weight: (1, c_out, c_in, k, k); style: (b, 1, c, 1, 1) - weight = self.scale * self.weight * style # (b, c_out, c_in, k, k) - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + self.eps) - weight = weight * demod.view(b, self.out_channels, 1, 1, 1) - - weight = weight.view(b * self.out_channels, c, self.kernel_size, self.kernel_size) - - if self.sample_mode == 'upsample': - x = x.view(1, b * c, h, w) - weight = weight.view(b, self.out_channels, c, self.kernel_size, self.kernel_size) - weight = weight.transpose(1, 2).reshape(b * c, self.out_channels, self.kernel_size, self.kernel_size) - out = F.conv_transpose2d(x, weight, padding=0, stride=2, groups=b) - out = out.view(b, self.out_channels, *out.shape[2:4]) - out = self.smooth(out) - elif self.sample_mode == 'downsample': - x = self.smooth(x) - x = x.view(1, b * c, *x.shape[2:4]) - out = F.conv2d(x, weight, padding=0, stride=2, groups=b) - out = out.view(b, self.out_channels, *out.shape[2:4]) - else: - x = x.view(1, b * c, h, w) - # weight: (b*c_out, c_in, k, k), groups=b - out = F.conv2d(x, weight, padding=self.padding, groups=b) - out = out.view(b, self.out_channels, *out.shape[2:4]) - - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(in_channels={self.in_channels}, ' - f'out_channels={self.out_channels}, ' - f'kernel_size={self.kernel_size}, ' - f'demodulate={self.demodulate}, sample_mode={self.sample_mode})') - - -class StyleConv(nn.Module): - """Style conv. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Size of the convolving kernel. - num_style_feat (int): Channel number of style features. - demodulate (bool): Whether demodulate in the conv layer. Default: True. - sample_mode (str | None): Indicating 'upsample', 'downsample' or None. - Default: None. 
- resample_kernel (list[int]): A list indicating the 1D resample kernel - magnitude. Default: (1, 3, 3, 1). - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - num_style_feat, - demodulate=True, - sample_mode=None, - resample_kernel=(1, 3, 3, 1)): - super(StyleConv, self).__init__() - self.modulated_conv = ModulatedConv2d( - in_channels, - out_channels, - kernel_size, - num_style_feat, - demodulate=demodulate, - sample_mode=sample_mode, - resample_kernel=resample_kernel) - self.weight = nn.Parameter(torch.zeros(1)) # for noise injection - self.activate = FusedLeakyReLU(out_channels) - - def forward(self, x, style, noise=None): - # modulate - out = self.modulated_conv(x, style) - # noise injection - if noise is None: - b, _, h, w = out.shape - noise = out.new_empty(b, 1, h, w).normal_() - out = out + self.weight * noise - # activation (with bias) - out = self.activate(out) - return out - - -class ToRGB(nn.Module): - """To RGB from features. - - Args: - in_channels (int): Channel number of input. - num_style_feat (int): Channel number of style features. - upsample (bool): Whether to upsample. Default: True. - resample_kernel (list[int]): A list indicating the 1D resample kernel - magnitude. Default: (1, 3, 3, 1). - """ - - def __init__(self, in_channels, num_style_feat, upsample=True, resample_kernel=(1, 3, 3, 1)): - super(ToRGB, self).__init__() - if upsample: - self.upsample = UpFirDnUpsample(resample_kernel, factor=2) - else: - self.upsample = None - self.modulated_conv = ModulatedConv2d( - in_channels, 3, kernel_size=1, num_style_feat=num_style_feat, demodulate=False, sample_mode=None) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, x, style, skip=None): - """Forward function. - - Args: - x (Tensor): Feature tensor with shape (b, c, h, w). - style (Tensor): Tensor with shape (b, num_style_feat). - skip (Tensor): Base/skip tensor. Default: None. - - Returns: - Tensor: RGB images. - """ - out = self.modulated_conv(x, style) - out = out + self.bias - if skip is not None: - if self.upsample: - skip = self.upsample(skip) - out = out + skip - return out - - -class ConstantInput(nn.Module): - """Constant input. - - Args: - num_channel (int): Channel number of constant input. - size (int): Spatial size of constant input. - """ - - def __init__(self, num_channel, size): - super(ConstantInput, self).__init__() - self.weight = nn.Parameter(torch.randn(1, num_channel, size, size)) - - def forward(self, batch): - out = self.weight.repeat(batch, 1, 1, 1) - return out - - -@ARCH_REGISTRY.register() -class StyleGAN2Generator(nn.Module): - """StyleGAN2 Generator. - - Args: - out_size (int): The spatial size of outputs. - num_style_feat (int): Channel number of style features. Default: 512. - num_mlp (int): Layer number of MLP style layers. Default: 8. - channel_multiplier (int): Channel multiplier for large networks of - StyleGAN2. Default: 2. - resample_kernel (list[int]): A list indicating the 1D resample kernel - magnitude. A cross product will be applied to extend the 1D resample - kernel to a 2D resample kernel. Default: (1, 3, 3, 1). - lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01. - narrow (float): Narrow ratio for channels. Default: 1.0. 
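- - Example (an illustrative sketch, not part of the original file; it assumes - the compiled basicsr ops for upfirdn2d and fused_act are available): - >>> g = StyleGAN2Generator(out_size=256) - >>> z = torch.randn(1, 512) # one style code per sample - >>> img, _ = g([z]) # img has shape (1, 3, 256, 256) 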
- """ - - def __init__(self, - out_size, - num_style_feat=512, - num_mlp=8, - channel_multiplier=2, - resample_kernel=(1, 3, 3, 1), - lr_mlp=0.01, - narrow=1): - super(StyleGAN2Generator, self).__init__() - # Style MLP layers - self.num_style_feat = num_style_feat - style_mlp_layers = [NormStyleCode()] - for i in range(num_mlp): - style_mlp_layers.append( - EqualLinear( - num_style_feat, num_style_feat, bias=True, bias_init_val=0, lr_mul=lr_mlp, - activation='fused_lrelu')) - self.style_mlp = nn.Sequential(*style_mlp_layers) - - channels = { - '4': int(512 * narrow), - '8': int(512 * narrow), - '16': int(512 * narrow), - '32': int(512 * narrow), - '64': int(256 * channel_multiplier * narrow), - '128': int(128 * channel_multiplier * narrow), - '256': int(64 * channel_multiplier * narrow), - '512': int(32 * channel_multiplier * narrow), - '1024': int(16 * channel_multiplier * narrow) - } - self.channels = channels - - self.constant_input = ConstantInput(channels['4'], size=4) - self.style_conv1 = StyleConv( - channels['4'], - channels['4'], - kernel_size=3, - num_style_feat=num_style_feat, - demodulate=True, - sample_mode=None, - resample_kernel=resample_kernel) - self.to_rgb1 = ToRGB(channels['4'], num_style_feat, upsample=False, resample_kernel=resample_kernel) - - self.log_size = int(math.log(out_size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - self.num_latent = self.log_size * 2 - 2 - - self.style_convs = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channels = channels['4'] - # noise - for layer_idx in range(self.num_layers): - resolution = 2**((layer_idx + 5) // 2) - shape = [1, 1, resolution, resolution] - self.noises.register_buffer(f'noise{layer_idx}', torch.randn(*shape)) - # style convs and to_rgbs - for i in range(3, self.log_size + 1): - out_channels = channels[f'{2**i}'] - self.style_convs.append( - StyleConv( - in_channels, - out_channels, - kernel_size=3, - num_style_feat=num_style_feat, - demodulate=True, - sample_mode='upsample', - resample_kernel=resample_kernel, - )) - self.style_convs.append( - StyleConv( - out_channels, - out_channels, - kernel_size=3, - num_style_feat=num_style_feat, - demodulate=True, - sample_mode=None, - resample_kernel=resample_kernel)) - self.to_rgbs.append(ToRGB(out_channels, num_style_feat, upsample=True, resample_kernel=resample_kernel)) - in_channels = out_channels - - def make_noise(self): - """Make noise for noise injection.""" - device = self.constant_input.weight.device - noises = [torch.randn(1, 1, 4, 4, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2**i, 2**i, device=device)) - - return noises - - def get_latent(self, x): - return self.style_mlp(x) - - def mean_latent(self, num_latent): - latent_in = torch.randn(num_latent, self.num_style_feat, device=self.constant_input.weight.device) - latent = self.style_mlp(latent_in).mean(0, keepdim=True) - return latent - - def forward(self, - styles, - input_is_latent=False, - noise=None, - randomize_noise=True, - truncation=1, - truncation_latent=None, - inject_index=None, - return_latents=False): - """Forward function for StyleGAN2Generator. - - Args: - styles (list[Tensor]): Sample codes of styles. - input_is_latent (bool): Whether input is latent style. - Default: False. - noise (Tensor | None): Input noise or None. Default: None. - randomize_noise (bool): Randomize noise, used when 'noise' is - False. Default: True. - truncation (float): TODO. Default: 1. 
- truncation_latent (Tensor | None): TODO. Default: None. - inject_index (int | None): The injection index for mixing noise. - Default: None. - return_latents (bool): Whether to return style latents. - Default: False. - """ - # style codes -> latents with Style MLP layer - if not input_is_latent: - styles = [self.style_mlp(s) for s in styles] - # noises - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers # for each style conv layer - else: # use the stored noise - noise = [getattr(self.noises, f'noise{i}') for i in range(self.num_layers)] - # style truncation - if truncation < 1: - style_truncation = [] - for style in styles: - style_truncation.append(truncation_latent + truncation * (style - truncation_latent)) - styles = style_truncation - # get style latent with injection - if len(styles) == 1: - inject_index = self.num_latent - - if styles[0].ndim < 3: - # repeat latent code for all the layers - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - else: # used for encoder with different latent code for each layer - latent = styles[0] - elif len(styles) == 2: # mixing noises - if inject_index is None: - inject_index = random.randint(1, self.num_latent - 1) - latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1) - latent = torch.cat([latent1, latent2], 1) - - # main generation - out = self.constant_input(latent.shape[0]) - out = self.style_conv1(out, latent[:, 0], noise=noise[0]) - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip(self.style_convs[::2], self.style_convs[1::2], noise[1::2], - noise[2::2], self.to_rgbs): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - i += 2 - - image = skip - - if return_latents: - return image, latent - else: - return image, None - - -class ScaledLeakyReLU(nn.Module): - """Scaled LeakyReLU. - - Args: - negative_slope (float): Negative slope. Default: 0.2. - """ - - def __init__(self, negative_slope=0.2): - super(ScaledLeakyReLU, self).__init__() - self.negative_slope = negative_slope - - def forward(self, x): - out = F.leaky_relu(x, negative_slope=self.negative_slope) - return out * math.sqrt(2) - - -class EqualConv2d(nn.Module): - """Equalized Conv2d used in StyleGAN2. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Size of the convolving kernel. - stride (int): Stride of the convolution. Default: 1. - padding (int): Zero-padding added to both sides of the input. - Default: 0. - bias (bool): If ``True``, adds a learnable bias to the output. - Default: ``True``. - bias_init_val (float): Bias initialized value. Default: 0. 
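- - Example (illustrative sketch only): - >>> conv = EqualConv2d(64, 128, kernel_size=3, padding=1) - >>> out = conv(torch.randn(1, 64, 32, 32)) # out.shape == (1, 128, 32, 32) 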
- """ - - def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, bias=True, bias_init_val=0): - super(EqualConv2d, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.stride = stride - self.padding = padding - self.scale = 1 / math.sqrt(in_channels * kernel_size**2) - - self.weight = nn.Parameter(torch.randn(out_channels, in_channels, kernel_size, kernel_size)) - if bias: - self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val)) - else: - self.register_parameter('bias', None) - - def forward(self, x): - out = F.conv2d( - x, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(in_channels={self.in_channels}, ' - f'out_channels={self.out_channels}, ' - f'kernel_size={self.kernel_size},' - f' stride={self.stride}, padding={self.padding}, ' - f'bias={self.bias is not None})') - - -class ConvLayer(nn.Sequential): - """Conv Layer used in StyleGAN2 Discriminator. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Kernel size. - downsample (bool): Whether downsample by a factor of 2. - Default: False. - resample_kernel (list[int]): A list indicating the 1D resample - kernel magnitude. A cross production will be applied to - extent 1D resample kernel to 2D resample kernel. - Default: (1, 3, 3, 1). - bias (bool): Whether with bias. Default: True. - activate (bool): Whether use activateion. Default: True. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - downsample=False, - resample_kernel=(1, 3, 3, 1), - bias=True, - activate=True): - layers = [] - # downsample - if downsample: - layers.append( - UpFirDnSmooth(resample_kernel, upsample_factor=1, downsample_factor=2, kernel_size=kernel_size)) - stride = 2 - self.padding = 0 - else: - stride = 1 - self.padding = kernel_size // 2 - # conv - layers.append( - EqualConv2d( - in_channels, out_channels, kernel_size, stride=stride, padding=self.padding, bias=bias - and not activate)) - # activation - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channels)) - else: - layers.append(ScaledLeakyReLU(0.2)) - - super(ConvLayer, self).__init__(*layers) - - -class ResBlock(nn.Module): - """Residual block used in StyleGAN2 Discriminator. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - resample_kernel (list[int]): A list indicating the 1D resample - kernel magnitude. A cross production will be applied to - extent 1D resample kernel to 2D resample kernel. - Default: (1, 3, 3, 1). - """ - - def __init__(self, in_channels, out_channels, resample_kernel=(1, 3, 3, 1)): - super(ResBlock, self).__init__() - - self.conv1 = ConvLayer(in_channels, in_channels, 3, bias=True, activate=True) - self.conv2 = ConvLayer( - in_channels, out_channels, 3, downsample=True, resample_kernel=resample_kernel, bias=True, activate=True) - self.skip = ConvLayer( - in_channels, out_channels, 1, downsample=True, resample_kernel=resample_kernel, bias=False, activate=False) - - def forward(self, x): - out = self.conv1(x) - out = self.conv2(out) - skip = self.skip(x) - out = (out + skip) / math.sqrt(2) - return out - - -@ARCH_REGISTRY.register() -class StyleGAN2Discriminator(nn.Module): - """StyleGAN2 Discriminator. - - Args: - out_size (int): The spatial size of outputs. 
- channel_multiplier (int): Channel multiplier for large networks of - StyleGAN2. Default: 2. - resample_kernel (list[int]): A list indicating the 1D resample kernel - magnitude. A cross product will be applied to extend the 1D resample - kernel to a 2D resample kernel. Default: (1, 3, 3, 1). - stddev_group (int): For group stddev statistics. Default: 4. - narrow (float): Narrow ratio for channels. Default: 1.0. - """ - - def __init__(self, out_size, channel_multiplier=2, resample_kernel=(1, 3, 3, 1), stddev_group=4, narrow=1): - super(StyleGAN2Discriminator, self).__init__() - - channels = { - '4': int(512 * narrow), - '8': int(512 * narrow), - '16': int(512 * narrow), - '32': int(512 * narrow), - '64': int(256 * channel_multiplier * narrow), - '128': int(128 * channel_multiplier * narrow), - '256': int(64 * channel_multiplier * narrow), - '512': int(32 * channel_multiplier * narrow), - '1024': int(16 * channel_multiplier * narrow) - } - - log_size = int(math.log(out_size, 2)) - - conv_body = [ConvLayer(3, channels[f'{out_size}'], 1, bias=True, activate=True)] - - in_channels = channels[f'{out_size}'] - for i in range(log_size, 2, -1): - out_channels = channels[f'{2**(i - 1)}'] - conv_body.append(ResBlock(in_channels, out_channels, resample_kernel)) - in_channels = out_channels - self.conv_body = nn.Sequential(*conv_body) - - self.final_conv = ConvLayer(in_channels + 1, channels['4'], 3, bias=True, activate=True) - self.final_linear = nn.Sequential( - EqualLinear( - channels['4'] * 4 * 4, channels['4'], bias=True, bias_init_val=0, lr_mul=1, activation='fused_lrelu'), - EqualLinear(channels['4'], 1, bias=True, bias_init_val=0, lr_mul=1, activation=None), - ) - self.stddev_group = stddev_group - self.stddev_feat = 1 - - def forward(self, x): - out = self.conv_body(x) - - b, c, h, w = out.shape - # concatenate a group stddev statistics to out - group = min(b, self.stddev_group) # Minibatch must be divisible by (or smaller than) group_size - stddev = out.view(group, -1, self.stddev_feat, c // self.stddev_feat, h, w) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, h, w) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - out = out.view(b, -1) - out = self.final_linear(out) - - return out diff --git a/spaces/Illumotion/Koboldcpp/examples/finetune/convert-finetune-checkpoint-to-gguf.py b/spaces/Illumotion/Koboldcpp/examples/finetune/convert-finetune-checkpoint-to-gguf.py deleted file mode 100644 index 96d6633ed7d5ee0cd38d93c33c1439614b5bfa51..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/finetune/convert-finetune-checkpoint-to-gguf.py +++ /dev/null @@ -1,489 +0,0 @@ -#!/usr/bin/env python3 -# finetune checkpoint --> gguf conversion - -import argparse -import gguf -import os -import struct -import sys -import numpy as np -from pathlib import Path - -# gguf constants -LLM_KV_OPTIMIZER_TYPE = "optimizer.type" -LLM_KV_OPTIMIZER_TYPE_ADAM = "adam" -LLM_KV_OPTIMIZER_TYPE_LBFGS = "lbfgs" -LLM_KV_OPTIMIZER_FILE_VERSION = "optimizer.file_version" -LLM_KV_OPTIMIZER_CONVERGENCE_PAST_COUNT = "optimizer.convergence_past_count" -LLM_KV_OPTIMIZER_PARAMETER_COUNT = "optimizer.parameter_count" -LLM_KV_OPTIMIZER_ITERATION_COUNT = "optimizer.iteration_count" -LLM_KV_OPTIMIZER_JUST_INITIALIZED = "optimizer.just_initialized" -LLM_KV_OPTIMIZER_ADAM_BEST_LOSS = "optimizer.adam.best_loss" -LLM_KV_OPTIMIZER_ADAM_PREVIOUS_LOSS = "optimizer.adam.previous_loss" 
-LLM_KV_OPTIMIZER_ADAM_NO_IMPROVEMENT_COUNT = "optimizer.adam.no_improvement_count" -LLM_KV_OPTIMIZER_LBFGS_APPROX_HESSIAN_COUNT = "optimizer.lbfgs.approx_hessian_count" -LLM_KV_OPTIMIZER_LBFGS_BEST_LOSS = "optimizer.lbfgs.best_loss" -LLM_KV_OPTIMIZER_LBFGS_LINE_SEARCH_STEP = "optimizer.lbfgs.line_search_step" -LLM_KV_OPTIMIZER_LBFGS_LINE_SEARCH_J = "optimizer.lbfgs.line_search_j" -LLM_KV_OPTIMIZER_LBFGS_LINE_SEARCH_K = "optimizer.lbfgs.line_search_k" -LLM_KV_OPTIMIZER_LBFGS_LINE_SEARCH_END = "optimizer.lbfgs.line_search_end" -LLM_KV_OPTIMIZER_LBFGS_NO_IMPROVEMENT_COUNT = "optimizer.lbfgs.no_improvement_count" - -LLM_TENSOR_OPTIMIZER_ADAM_FIRST_MOMENTS = "optimizer.adam.first_moments" -LLM_TENSOR_OPTIMIZER_ADAM_SECOND_MOMENTS = "optimizer.adam.second_moments" -LLM_TENSOR_OPTIMIZER_ADAM_PAST_LOSS_VALUES = "optimizer.adam.past_loss_values" - -LLM_TENSOR_OPTIMIZER_LBFGS_CURRENT_PARAMETERS = "optimizer.lbfgs.current_parameters" -LLM_TENSOR_OPTIMIZER_LBFGS_PREVIOUS_PARAMETERS = "optimizer.lbfgs.previous_parameters" -LLM_TENSOR_OPTIMIZER_LBFGS_CURRENT_GRADIENTS = "optimizer.lbfgs.current_gradients" -LLM_TENSOR_OPTIMIZER_LBFGS_PREVIOUS_GRADIENTS = "optimizer.lbfgs.previous_gradients" -LLM_TENSOR_OPTIMIZER_LBFGS_SEARCH_DIRECTION = "optimizer.lbfgs.search_direction" -LLM_TENSOR_OPTIMIZER_LBFGS_PAST_LOSS_VALUES = "optimizer.lbfgs.past_loss_values" -LLM_TENSOR_OPTIMIZER_LBFGS_MEMORY_ALPHA = "optimizer.lbfgs.memory_alpha" -LLM_TENSOR_OPTIMIZER_LBFGS_MEMORY_YS = "optimizer.lbfgs.memory_ys" -LLM_TENSOR_OPTIMIZER_LBFGS_MEMORY_S = "optimizer.lbfgs.memory_s" -LLM_TENSOR_OPTIMIZER_LBFGS_MEMORY_Y = "optimizer.lbfgs.memory_y" - -LLM_KV_TRAINING_TYPE_TRAIN_MODEL = "train_model" -LLM_KV_TRAINING_TYPE_FINETUNE_LORA = "finetune_lora" -LLM_KV_TRAINING_TYPE = "training.type" -LLM_KV_TRAINING_FILE_VERSION = "training.file_version" -LLM_KV_TRAINING_ITERATION_COUNT = "training.iteration_count" -LLM_KV_TRAINING_SAMPLE_COUNT = "training.sample_count" -LLM_KV_TRAINING_TOKEN_COUNT = "training.token_count" - -LLM_KV_TRAINING_LORA_RANK_TOKEN_EMBD = "training.lora.rank.token_embd" -LLM_KV_TRAINING_LORA_RANK_OUTPUT_NORM = "training.lora.rank.output_norm" -LLM_KV_TRAINING_LORA_RANK_OUTPUT = "training.lora.rank.output" -LLM_KV_TRAINING_LORA_RANK_ATTN_NORM = "training.lora.rank.attn_norm" -LLM_KV_TRAINING_LORA_RANK_ATTN_Q = "training.lora.rank.attn_q" -LLM_KV_TRAINING_LORA_RANK_ATTN_K = "training.lora.rank.attn_k" -LLM_KV_TRAINING_LORA_RANK_ATTN_V = "training.lora.rank.attn_v" -LLM_KV_TRAINING_LORA_RANK_ATTN_OUT = "training.lora.rank.attn_output" -LLM_KV_TRAINING_LORA_RANK_FFN_NORM = "training.lora.rank.ffn_norm" -LLM_KV_TRAINING_LORA_RANK_FFN_GATE = "training.lora.rank.ffn_gate" -LLM_KV_TRAINING_LORA_RANK_FFN_DOWN = "training.lora.rank.ffn_down" -LLM_KV_TRAINING_LORA_RANK_FFN_UP = "training.lora.rank.ffn_up" - -class Tensor: - def __init__(self, dtype='f', ne=None): - if ne is None: - ne = [] - self.dtype = dtype - self.ne = ne - self.nbytes = 0 - if self.dtype == 'f': - if len(self.ne) == 0: - self.nbytes = 0 - else: - self.nbytes = int(np.product(self.ne)) * 4 - else: - raise ValueError(f"Unhandled data type '{self.dtype}'") - - def load(self, data, offset): - nd = struct.unpack(' 0 else []) - - self.lbfgs_x = Tensor('f', [self.nx]) - self.lbfgs_xp = Tensor('f', [self.nx]) - self.lbfgs_g = Tensor('f', [self.nx]) - self.lbfgs_gp = Tensor('f', [self.nx]) - self.lbfgs_d = Tensor('f', [self.nx]) - self.lbfgs_pf = Tensor('f', [self.past] if self.past > 0 else []) - self.lbfgs_lmal = Tensor('f', [self.lbfgs_m]) - self.lbfgs_lmys = 
Tensor('f', [self.lbfgs_m]) - self.lbfgs_lms = Tensor('f', [self.nx, self.lbfgs_m]) - self.lbfgs_lmy = Tensor('f', [self.nx, self.lbfgs_m]) - - # forgot to save type in version 1: - # guess self.type from number of remaining bytes - size_type_0 = 12 + sum([t.max_storage_size() for t in - [self.adam_m, self.adam_v] - +([self.adam_pf] if (self.past > 0) else [])]) - size_type_1 = 24 + sum([t.max_storage_size() for t in - [self.lbfgs_x, self.lbfgs_xp, self.lbfgs_g, - self.lbfgs_gp, self.lbfgs_d, self.lbfgs_pf, - self.lbfgs_lmal, self.lbfgs_lmys, - self.lbfgs_lms, self.lbfgs_lmy] - +([self.lbfgs_pf] if (self.past > 0) else [])]) - # due to alignment padding the size might not by exact - # but the difference in size for both types is significant, - # so we can just use whichever is closest - remaining = len(data) - offset - if abs(remaining - size_type_0) < abs(remaining - size_type_1): - self.type = 0 - else: - self.type = 1 - - if self.type == 0: - offset = self.adam_m.load(data, offset) - offset = self.adam_v.load(data, offset) - offset = self.adam_pf.load(data,offset) - - self.adam_fx_best = struct.unpack(' 0: - self.adam_pf.save_gguf(gguf_writer, name=LLM_TENSOR_OPTIMIZER_ADAM_PAST_LOSS_VALUES) - - elif self.type == 1: - gguf_writer.add_string(LLM_KV_OPTIMIZER_TYPE, LLM_KV_OPTIMIZER_TYPE_LBFGS) - gguf_writer.add_uint32(LLM_KV_OPTIMIZER_LBFGS_APPROX_HESSIAN_COUNT, self.lbfgs_m) - gguf_writer.add_float32(LLM_KV_OPTIMIZER_LBFGS_BEST_LOSS, self.lbfgs_fx_best) - gguf_writer.add_float32(LLM_KV_OPTIMIZER_LBFGS_LINE_SEARCH_STEP, self.lbfgs_step) - gguf_writer.add_int32(LLM_KV_OPTIMIZER_LBFGS_LINE_SEARCH_J, self.lbfgs_j) - gguf_writer.add_int32(LLM_KV_OPTIMIZER_LBFGS_LINE_SEARCH_K, self.lbfgs_k) - gguf_writer.add_int32(LLM_KV_OPTIMIZER_LBFGS_LINE_SEARCH_END, self.lbfgs_end) - gguf_writer.add_uint32(LLM_KV_OPTIMIZER_LBFGS_NO_IMPROVEMENT_COUNT, self.lbfgs_n_no_improvement) - - self.lbfgs_x.save_gguf(gguf_writer, name=LLM_TENSOR_OPTIMIZER_LBFGS_CURRENT_PARAMETERS) - self.lbfgs_xp.save_gguf(gguf_writer, name=LLM_TENSOR_OPTIMIZER_LBFGS_PREVIOUS_PARAMETERS) - self.lbfgs_g.save_gguf(gguf_writer, name=LLM_TENSOR_OPTIMIZER_LBFGS_CURRENT_GRADIENTS) - self.lbfgs_gp.save_gguf(gguf_writer, name=LLM_TENSOR_OPTIMIZER_LBFGS_PREVIOUS_GRADIENTS) - self.lbfgs_d.save_gguf(gguf_writer, name=LLM_TENSOR_OPTIMIZER_LBFGS_SEARCH_DIRECTION) - if self.past > 0: - self.lbfgs_pf.save_gguf(gguf_writer, name=LLM_TENSOR_OPTIMIZER_LBFGS_PAST_LOSS_VALUES) - self.lbfgs_lmal.save_gguf(gguf_writer, name=LLM_TENSOR_OPTIMIZER_LBFGS_MEMORY_ALPHA) - self.lbfgs_lmys.save_gguf(gguf_writer, name=LLM_TENSOR_OPTIMIZER_LBFGS_MEMORY_YS) - self.lbfgs_lms.save_gguf(gguf_writer, name=LLM_TENSOR_OPTIMIZER_LBFGS_MEMORY_S) - self.lbfgs_lmy.save_gguf(gguf_writer, name=LLM_TENSOR_OPTIMIZER_LBFGS_MEMORY_Y) - else: - raise ValueError('Unknown optimizer type') - -class LoraParams: - def __init__(self): - pass - - def load(self, data, offset): - self.n_rank_attention_norm = struct.unpack(' str: - """Fix invalid escape sequences in JSON strings. - - Args: - json_to_load (str): The JSON string. - error_message (str): The error message from the JSONDecodeError - exception. - - Returns: - str: The JSON string with invalid escape sequences fixed. 
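- - Example (an illustrative sketch; it assumes that extract_char_position, - defined earlier in this module, recovers the '(char N)' offset from the - error message): - >>> bad = '{"a": "\\z"}' # \\z is not a valid JSON escape - >>> try: json.loads(bad) - ... except json.JSONDecodeError as e: err = str(e) - >>> fix_invalid_escape(bad, err) - '{"a": "z"}' 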
- """ - while error_message.startswith("Invalid \\escape"): - bad_escape_location = extract_char_position(error_message) - json_to_load = ( - json_to_load[:bad_escape_location] + json_to_load[bad_escape_location + 1 :] - ) - try: - json.loads(json_to_load) - return json_to_load - except json.JSONDecodeError as e: - if CFG.debug_mode: - print("json loads error - fix invalid escape", e) - error_message = str(e) - return json_to_load - - -def balance_braces(json_string: str) -> Optional[str]: - """ - Balance the braces in a JSON string. - - Args: - json_string (str): The JSON string. - - Returns: - str: The JSON string with braces balanced. - """ - - open_braces_count = json_string.count("{") - close_braces_count = json_string.count("}") - - while open_braces_count > close_braces_count: - json_string += "}" - close_braces_count += 1 - - while close_braces_count > open_braces_count: - json_string = json_string.rstrip("}") - close_braces_count -= 1 - - with contextlib.suppress(json.JSONDecodeError): - json.loads(json_string) - return json_string - - -def add_quotes_to_property_names(json_string: str) -> str: - """ - Add quotes to property names in a JSON string. - - Args: - json_string (str): The JSON string. - - Returns: - str: The JSON string with quotes added to property names. - """ - - def replace_func(match: re.Match) -> str: - return f'"{match[1]}":' - - property_name_pattern = re.compile(r"(\w+):") - corrected_json_string = property_name_pattern.sub(replace_func, json_string) - - try: - json.loads(corrected_json_string) - return corrected_json_string - except json.JSONDecodeError as e: - raise e - - -def correct_json(json_to_load: str) -> str: - """ - Correct common JSON errors. - Args: - json_to_load (str): The JSON string. - """ - - try: - if CFG.debug_mode: - print("json", json_to_load) - json.loads(json_to_load) - return json_to_load - except json.JSONDecodeError as e: - if CFG.debug_mode: - print("json loads error", e) - error_message = str(e) - if error_message.startswith("Invalid \\escape"): - json_to_load = fix_invalid_escape(json_to_load, error_message) - if error_message.startswith( - "Expecting property name enclosed in double quotes" - ): - json_to_load = add_quotes_to_property_names(json_to_load) - try: - json.loads(json_to_load) - return json_to_load - except json.JSONDecodeError as e: - if CFG.debug_mode: - print("json loads error - add quotes", e) - error_message = str(e) - if balanced_str := balance_braces(json_to_load): - return balanced_str - return json_to_load diff --git a/spaces/JosephusCheung/ACertainsStrategyTalk/12.html b/spaces/JosephusCheung/ACertainsStrategyTalk/12.html deleted file mode 100644 index 1cf9b63c00876073f6d017127c11ec45a11c3032..0000000000000000000000000000000000000000 --- a/spaces/JosephusCheung/ACertainsStrategyTalk/12.html +++ /dev/null @@ -1,94 +0,0 @@ - - - - - - - - - -
-
-Future Outlook
-Certains Certains Certains
-To be honest, I have already achieved a model I am satisfied with, and
-the remaining work will only be labor-intensive if no new
-transformative method is introduced.
-Work such as adding all the data from Danbooru2022 to the
-dataset and continuing to train is, I think, something anyone
-with GPU power and spare time can do. I will further improve it
-when I have more leisure time.
-
- - diff --git a/spaces/JunchuanYu/SegRS/segment_anything/automatic_mask_generator.py b/spaces/JunchuanYu/SegRS/segment_anything/automatic_mask_generator.py deleted file mode 100644 index 23264971b7ff5aa0b4f499ade7773b68dce984b6..0000000000000000000000000000000000000000 --- a/spaces/JunchuanYu/SegRS/segment_anything/automatic_mask_generator.py +++ /dev/null @@ -1,372 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from torchvision.ops.boxes import batched_nms, box_area # type: ignore - -from typing import Any, Dict, List, Optional, Tuple - -from .modeling import Sam -from .predictor import SamPredictor -from .utils.amg import ( - MaskData, - area_from_rle, - batch_iterator, - batched_mask_to_box, - box_xyxy_to_xywh, - build_all_layer_point_grids, - calculate_stability_score, - coco_encode_rle, - generate_crop_boxes, - is_box_near_crop_edge, - mask_to_rle_pytorch, - remove_small_regions, - rle_to_mask, - uncrop_boxes_xyxy, - uncrop_masks, - uncrop_points, -) - - -class SamAutomaticMaskGenerator: - def __init__( - self, - model: Sam, - points_per_side: Optional[int] = 32, - points_per_batch: int = 64, - pred_iou_thresh: float = 0.88, - stability_score_thresh: float = 0.95, - stability_score_offset: float = 1.0, - box_nms_thresh: float = 0.7, - crop_n_layers: int = 0, - crop_nms_thresh: float = 0.7, - crop_overlap_ratio: float = 512 / 1500, - crop_n_points_downscale_factor: int = 1, - point_grids: Optional[List[np.ndarray]] = None, - min_mask_region_area: int = 0, - output_mode: str = "binary_mask", - ) -> None: - """ - Using a SAM model, generates masks for the entire image. - Generates a grid of point prompts over the image, then filters - low quality and duplicate masks. The default settings are chosen - for SAM with a ViT-H backbone. - - Arguments: - model (Sam): The SAM model to use for mask prediction. - points_per_side (int or None): The number of points to be sampled - along one side of the image. The total number of points is - points_per_side**2. If None, 'point_grids' must provide explicit - point sampling. - points_per_batch (int): Sets the number of points run simultaneously - by the model. Higher numbers may be faster but use more GPU memory. - pred_iou_thresh (float): A filtering threshold in [0,1], using the - model's predicted mask quality. - stability_score_thresh (float): A filtering threshold in [0,1], using - the stability of the mask under changes to the cutoff used to binarize - the model's mask predictions. - stability_score_offset (float): The amount to shift the cutoff when - calculated the stability score. - box_nms_thresh (float): The box IoU cutoff used by non-maximal - suppression to filter duplicate masks. - crops_n_layers (int): If >0, mask prediction will be run again on - crops of the image. Sets the number of layers to run, where each - layer has 2**i_layer number of image crops. - crops_nms_thresh (float): The box IoU cutoff used by non-maximal - suppression to filter duplicate masks between different crops. - crop_overlap_ratio (float): Sets the degree to which crops overlap. - In the first crop layer, crops will overlap by this fraction of - the image length. Later layers with more crops scale down this overlap. - crop_n_points_downscale_factor (int): The number of points-per-side - sampled in layer n is scaled down by crop_n_points_downscale_factor**n. 
- point_grids (list(np.ndarray) or None): A list over explicit grids - of points used for sampling, normalized to [0,1]. The nth grid in the - list is used in the nth crop layer. Exclusive with points_per_side. - min_mask_region_area (int): If >0, postprocessing will be applied - to remove disconnected regions and holes in masks with area smaller - than min_mask_region_area. Requires opencv. - output_mode (str): The form masks are returned in. Can be 'binary_mask', - 'uncompressed_rle', or 'coco_rle'. 'coco_rle' requires pycocotools. - For large resolutions, 'binary_mask' may consume large amounts of - memory. - """ - - assert (points_per_side is None) != ( - point_grids is None - ), "Exactly one of points_per_side or point_grid must be provided." - if points_per_side is not None: - self.point_grids = build_all_layer_point_grids( - points_per_side, - crop_n_layers, - crop_n_points_downscale_factor, - ) - elif point_grids is not None: - self.point_grids = point_grids - else: - raise ValueError("Can't have both points_per_side and point_grid be None.") - - assert output_mode in [ - "binary_mask", - "uncompressed_rle", - "coco_rle", - ], f"Unknown output_mode {output_mode}." - if output_mode == "coco_rle": - from pycocotools import mask as mask_utils # type: ignore # noqa: F401 - - if min_mask_region_area > 0: - import cv2 # type: ignore # noqa: F401 - - self.predictor = SamPredictor(model) - self.points_per_batch = points_per_batch - self.pred_iou_thresh = pred_iou_thresh - self.stability_score_thresh = stability_score_thresh - self.stability_score_offset = stability_score_offset - self.box_nms_thresh = box_nms_thresh - self.crop_n_layers = crop_n_layers - self.crop_nms_thresh = crop_nms_thresh - self.crop_overlap_ratio = crop_overlap_ratio - self.crop_n_points_downscale_factor = crop_n_points_downscale_factor - self.min_mask_region_area = min_mask_region_area - self.output_mode = output_mode - - @torch.no_grad() - def generate(self, image: np.ndarray) -> List[Dict[str, Any]]: - """ - Generates masks for the given image. - - Arguments: - image (np.ndarray): The image to generate masks for, in HWC uint8 format. - - Returns: - list(dict(str, any)): A list over records for masks. Each record is - a dict containing the following keys: - segmentation (dict(str, any) or np.ndarray): The mask. If - output_mode='binary_mask', is an array of shape HW. Otherwise, - is a dictionary containing the RLE. - bbox (list(float)): The box around the mask, in XYWH format. - area (int): The area in pixels of the mask. - predicted_iou (float): The model's own prediction of the mask's - quality. This is filtered by the pred_iou_thresh parameter. - point_coords (list(list(float))): The point coordinates input - to the model to generate this mask. - stability_score (float): A measure of the mask's quality. This - is filtered on using the stability_score_thresh parameter. - crop_box (list(float)): The crop of the image used to generate - the mask, given in XYWH format. 
- """ - - # Generate masks - mask_data = self._generate_masks(image) - - # Filter small disconnected regions and holes in masks - if self.min_mask_region_area > 0: - mask_data = self.postprocess_small_regions( - mask_data, - self.min_mask_region_area, - max(self.box_nms_thresh, self.crop_nms_thresh), - ) - - # Encode masks - if self.output_mode == "coco_rle": - mask_data["segmentations"] = [coco_encode_rle(rle) for rle in mask_data["rles"]] - elif self.output_mode == "binary_mask": - mask_data["segmentations"] = [rle_to_mask(rle) for rle in mask_data["rles"]] - else: - mask_data["segmentations"] = mask_data["rles"] - - # Write mask records - curr_anns = [] - for idx in range(len(mask_data["segmentations"])): - ann = { - "segmentation": mask_data["segmentations"][idx], - "area": area_from_rle(mask_data["rles"][idx]), - "bbox": box_xyxy_to_xywh(mask_data["boxes"][idx]).tolist(), - "predicted_iou": mask_data["iou_preds"][idx].item(), - "point_coords": [mask_data["points"][idx].tolist()], - "stability_score": mask_data["stability_score"][idx].item(), - "crop_box": box_xyxy_to_xywh(mask_data["crop_boxes"][idx]).tolist(), - } - curr_anns.append(ann) - - return curr_anns - - def _generate_masks(self, image: np.ndarray) -> MaskData: - orig_size = image.shape[:2] - crop_boxes, layer_idxs = generate_crop_boxes( - orig_size, self.crop_n_layers, self.crop_overlap_ratio - ) - - # Iterate over image crops - data = MaskData() - for crop_box, layer_idx in zip(crop_boxes, layer_idxs): - crop_data = self._process_crop(image, crop_box, layer_idx, orig_size) - data.cat(crop_data) - - # Remove duplicate masks between crops - if len(crop_boxes) > 1: - # Prefer masks from smaller crops - scores = 1 / box_area(data["crop_boxes"]) - scores = scores.to(data["boxes"].device) - keep_by_nms = batched_nms( - data["boxes"].float(), - scores, - torch.zeros(len(data["boxes"])), # categories - iou_threshold=self.crop_nms_thresh, - ) - data.filter(keep_by_nms) - - data.to_numpy() - return data - - def _process_crop( - self, - image: np.ndarray, - crop_box: List[int], - crop_layer_idx: int, - orig_size: Tuple[int, ...], - ) -> MaskData: - # Crop the image and calculate embeddings - x0, y0, x1, y1 = crop_box - cropped_im = image[y0:y1, x0:x1, :] - cropped_im_size = cropped_im.shape[:2] - self.predictor.set_image(cropped_im) - - # Get points for this crop - points_scale = np.array(cropped_im_size)[None, ::-1] - points_for_image = self.point_grids[crop_layer_idx] * points_scale - - # Generate masks for this crop in batches - data = MaskData() - for (points,) in batch_iterator(self.points_per_batch, points_for_image): - batch_data = self._process_batch(points, cropped_im_size, crop_box, orig_size) - data.cat(batch_data) - del batch_data - self.predictor.reset_image() - - # Remove duplicates within this crop. 
- keep_by_nms = batched_nms( - data["boxes"].float(), - data["iou_preds"], - torch.zeros(len(data["boxes"])), # categories - iou_threshold=self.box_nms_thresh, - ) - data.filter(keep_by_nms) - - # Return to the original image frame - data["boxes"] = uncrop_boxes_xyxy(data["boxes"], crop_box) - data["points"] = uncrop_points(data["points"], crop_box) - data["crop_boxes"] = torch.tensor([crop_box for _ in range(len(data["rles"]))]) - - return data - - def _process_batch( - self, - points: np.ndarray, - im_size: Tuple[int, ...], - crop_box: List[int], - orig_size: Tuple[int, ...], - ) -> MaskData: - orig_h, orig_w = orig_size - - # Run model on this batch - transformed_points = self.predictor.transform.apply_coords(points, im_size) - in_points = torch.as_tensor(transformed_points, device=self.predictor.device) - in_labels = torch.ones(in_points.shape[0], dtype=torch.int, device=in_points.device) - masks, iou_preds, _ = self.predictor.predict_torch( - in_points[:, None, :], - in_labels[:, None], - multimask_output=True, - return_logits=True, - ) - - # Serialize predictions and store in MaskData - data = MaskData( - masks=masks.flatten(0, 1), - iou_preds=iou_preds.flatten(0, 1), - points=torch.as_tensor(points.repeat(masks.shape[1], axis=0)), - ) - del masks - - # Filter by predicted IoU - if self.pred_iou_thresh > 0.0: - keep_mask = data["iou_preds"] > self.pred_iou_thresh - data.filter(keep_mask) - - # Calculate stability score - data["stability_score"] = calculate_stability_score( - data["masks"], self.predictor.model.mask_threshold, self.stability_score_offset - ) - if self.stability_score_thresh > 0.0: - keep_mask = data["stability_score"] >= self.stability_score_thresh - data.filter(keep_mask) - - # Threshold masks and calculate boxes - data["masks"] = data["masks"] > self.predictor.model.mask_threshold - data["boxes"] = batched_mask_to_box(data["masks"]) - - # Filter boxes that touch crop boundaries - keep_mask = ~is_box_near_crop_edge(data["boxes"], crop_box, [0, 0, orig_w, orig_h]) - if not torch.all(keep_mask): - data.filter(keep_mask) - - # Compress to RLE - data["masks"] = uncrop_masks(data["masks"], crop_box, orig_h, orig_w) - data["rles"] = mask_to_rle_pytorch(data["masks"]) - del data["masks"] - - return data - - @staticmethod - def postprocess_small_regions( - mask_data: MaskData, min_area: int, nms_thresh: float - ) -> MaskData: - """ - Removes small disconnected regions and holes in masks, then reruns - box NMS to remove any new duplicates. - - Edits mask_data in place. - - Requires open-cv as a dependency. 
- """ - if len(mask_data["rles"]) == 0: - return mask_data - - # Filter small disconnected regions and holes - new_masks = [] - scores = [] - for rle in mask_data["rles"]: - mask = rle_to_mask(rle) - - mask, changed = remove_small_regions(mask, min_area, mode="holes") - unchanged = not changed - mask, changed = remove_small_regions(mask, min_area, mode="islands") - unchanged = unchanged and not changed - - new_masks.append(torch.as_tensor(mask).unsqueeze(0)) - # Give score=0 to changed masks and score=1 to unchanged masks - # so NMS will prefer ones that didn't need postprocessing - scores.append(float(unchanged)) - - # Recalculate boxes and remove any new duplicates - masks = torch.cat(new_masks, dim=0) - boxes = batched_mask_to_box(masks) - keep_by_nms = batched_nms( - boxes.float(), - torch.as_tensor(scores), - torch.zeros(len(boxes)), # categories - iou_threshold=nms_thresh, - ) - - # Only recalculate RLEs for masks that have changed - for i_mask in keep_by_nms: - if scores[i_mask] == 0.0: - mask_torch = masks[i_mask].unsqueeze(0) - mask_data["rles"][i_mask] = mask_to_rle_pytorch(mask_torch)[0] - mask_data["boxes"][i_mask] = boxes[i_mask] # update res directly - mask_data.filter(keep_by_nms) - - return mask_data diff --git a/spaces/KJMAN678/text_generate/README.md b/spaces/KJMAN678/text_generate/README.md deleted file mode 100644 index bca8e87d0666d181aa209371c748bf816f5359f9..0000000000000000000000000000000000000000 --- a/spaces/KJMAN678/text_generate/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Text Generation by GPT-3 -emoji: 💩 -colorFrom: yellow -colorTo: purple -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/KPCGD/bingo/src/components/user-menu.tsx b/spaces/KPCGD/bingo/src/components/user-menu.tsx deleted file mode 100644 index 9bd1edc9cf9f39b63629b021f0c1186b1a7c1341..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/src/components/user-menu.tsx +++ /dev/null @@ -1,113 +0,0 @@ -'use client' - -import { useEffect, useState } from 'react' -import Image from 'next/image' -import { toast } from 'react-hot-toast' -import { Button } from '@/components/ui/button' -import pkg from '../../package.json' -import { - DropdownMenu, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuSeparator, - DropdownMenuTrigger -} from '@/components/ui/dropdown-menu' -import { IconCopy, IconExternalLink, IconGitHub } from '@/components/ui/icons' -import SettingIcon from '@/assets/images/settings.svg' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - -export function UserMenu() { - const [host, setHost] = useState('') - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - useEffect(() => { - setHost(location.host) - }, []) - - useEffect(() => { - if (isCopied) { - toast.success('复制成功') - } - }, [isCopied]) - return ( -
-    {/*
-      [Dropdown menu markup unrecoverable] Recoverable items, in order:
-        设置用户 (set user)        -> location.href = '#dialog="settings"'
-        语音设置 (voice settings)  -> location.href = '#dialog="voice"'
-        开源地址 (source repo)     -> external link
-        托管地址 (hosting, 🤗)     -> external link
-        复制站点 (copy site)       -> copyToClipboard
-        版本信息 (version info) {pkg.version}
-        站点域名 (site domain) {host} -> onClick={() => copyToClipboard(host)}
-    */}
- ) -} diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/modules/vc/modules.py b/spaces/Kangarroar/ApplioRVC-Inference/infer/modules/vc/modules.py deleted file mode 100644 index 458cfbe860b23bdd8f07abc2934443e6b8b01c3a..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/infer/modules/vc/modules.py +++ /dev/null @@ -1,526 +0,0 @@ -import os, sys -import traceback -import logging -now_dir = os.getcwd() -sys.path.append(now_dir) -logger = logging.getLogger(__name__) -import lib.globals.globals as rvc_globals -import numpy as np -import soundfile as sf -import torch -from io import BytesIO -from infer.lib.audio import load_audio -from infer.lib.audio import wav2 -from infer.lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from infer.modules.vc.pipeline import Pipeline -from infer.modules.vc.utils import * -import time -import scipy.io.wavfile as wavfile - -def note_to_hz(note_name): - SEMITONES = {'C': -9, 'C#': -8, 'D': -7, 'D#': -6, 'E': -5, 'F': -4, 'F#': -3, 'G': -2, 'G#': -1, 'A': 0, 'A#': 1, 'B': 2} - pitch_class, octave = note_name[:-1], int(note_name[-1]) - semitone = SEMITONES[pitch_class] - note_number = 12 * (octave - 4) + semitone - frequency = 440.0 * (2.0 ** (1.0/12)) ** note_number - return frequency - -class VC: - def __init__(self, config): - self.n_spk = None - self.tgt_sr = None - self.net_g = None - self.pipeline = None - self.cpt = None - self.version = None - self.if_f0 = None - self.version = None - self.hubert_model = None - - self.config = config - - def get_vc(self, sid, *to_return_protect): - logger.info("Get sid: " + sid) - - to_return_protect0 = { - "visible": self.if_f0 != 0, - "value": to_return_protect[0] - if self.if_f0 != 0 and to_return_protect - else 0.5, - "__type__": "update", - } - to_return_protect1 = { - "visible": self.if_f0 != 0, - "value": to_return_protect[1] - if self.if_f0 != 0 and to_return_protect - else 0.33, - "__type__": "update", - } - - if not sid: - if self.hubert_model is not None: # 考虑到轮询, 需要加个判断看是否 sid 是由有模型切换到无模型的 - logger.info("Clean model cache") - del ( - self.net_g, - self.n_spk, - self.vc, - self.hubert_model, - self.tgt_sr, - ) # ,cpt - self.hubert_model = ( - self.net_g - ) = self.n_spk = self.vc = self.hubert_model = self.tgt_sr = None - if torch.cuda.is_available(): - torch.cuda.empty_cache() - ###楼下不这么折腾清理不干净 - self.if_f0 = self.cpt.get("f0", 1) - self.version = self.cpt.get("version", "v1") - if self.version == "v1": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs256NSFsid( - *self.cpt["config"], is_half=self.config.is_half - ) - else: - self.net_g = SynthesizerTrnMs256NSFsid_nono(*self.cpt["config"]) - elif self.version == "v2": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs768NSFsid( - *self.cpt["config"], is_half=self.config.is_half - ) - else: - self.net_g = SynthesizerTrnMs768NSFsid_nono(*self.cpt["config"]) - del self.net_g, self.cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return ( - {"visible": False, "__type__": "update"}, - { - "visible": True, - "value": to_return_protect0, - "__type__": "update", - }, - { - "visible": True, - "value": to_return_protect1, - "__type__": "update", - }, - "", - "", - ) - #person = f'{os.getenv("weight_root")}/{sid}' - person = f'{sid}' - #logger.info(f"Loading: {person}") - logger.info(f"Loading...") - self.cpt = torch.load(person, map_location="cpu") - self.tgt_sr = self.cpt["config"][-1] - 
self.cpt["config"][-3] = self.cpt["weight"]["emb_g.weight"].shape[0] # n_spk - self.if_f0 = self.cpt.get("f0", 1) - self.version = self.cpt.get("version", "v1") - - synthesizer_class = { - ("v1", 1): SynthesizerTrnMs256NSFsid, - ("v1", 0): SynthesizerTrnMs256NSFsid_nono, - ("v2", 1): SynthesizerTrnMs768NSFsid, - ("v2", 0): SynthesizerTrnMs768NSFsid_nono, - } - - self.net_g = synthesizer_class.get( - (self.version, self.if_f0), SynthesizerTrnMs256NSFsid - )(*self.cpt["config"], is_half=self.config.is_half) - - del self.net_g.enc_q - - self.net_g.load_state_dict(self.cpt["weight"], strict=False) - self.net_g.eval().to(self.config.device) - if self.config.is_half: - self.net_g = self.net_g.half() - else: - self.net_g = self.net_g.float() - - self.pipeline = Pipeline(self.tgt_sr, self.config) - n_spk = self.cpt["config"][-3] - index = {"value": get_index_path_from_model(sid), "__type__": "update"} - logger.info("Select index: " + index["value"]) - - return ( - ( - {"visible": False, "maximum": n_spk, "__type__": "update"}, - to_return_protect0, - to_return_protect1 - ) - if to_return_protect - else {"visible": False, "maximum": n_spk, "__type__": "update"} - ) - - - def vc_single( - self, - sid, - input_audio_path0, - input_audio_path1, - f0_up_key, - f0_file, - f0_method, - file_index, - file_index2, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - crepe_hop_length, - f0_min, - note_min, - f0_max, - note_max, - f0_autotune, - ): - global total_time - total_time = 0 - start_time = time.time() - if not input_audio_path0 and not input_audio_path1: - return "You need to upload an audio", None - - if (not os.path.exists(input_audio_path0)) and (not os.path.exists(os.path.join(now_dir, input_audio_path0))): - return "Audio was not properly selected or doesn't exist", None - - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"\nStarting inference for '{os.path.basename(input_audio_path1)}'") - print("-------------------") - f0_up_key = int(f0_up_key) - if rvc_globals.NotesOrHertz and f0_method != 'rmvpe': - f0_min = note_to_hz(note_min) if note_min else 50 - f0_max = note_to_hz(note_max) if note_max else 1100 - print(f"Converted Min pitch: freq - {f0_min}\n" - f"Converted Max pitch: freq - {f0_max}") - else: - f0_min = f0_min or 50 - f0_max = f0_max or 1100 - try: - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"Attempting to load {input_audio_path1}....") - audio = load_audio(file=input_audio_path1, - sr=16000, - DoFormant=rvc_globals.DoFormant, - Quefrency=rvc_globals.Quefrency, - Timbre=rvc_globals.Timbre) - - audio_max = np.abs(audio).max() / 0.95 - if audio_max > 1: - audio /= audio_max - times = [0, 0, 0] - - if self.hubert_model is None: - self.hubert_model = load_hubert(self.config) - - try: - self.if_f0 = self.cpt.get("f0", 1) - except NameError: - message = "Model was not properly selected" - print(message) - return message, None - - file_index = ( - ( - file_index.strip(" ") - .strip('"') - .strip("\n") - .strip('"') - .strip(" ") - .replace("trained", "added") - ) - if file_index != "" - else file_index2 - ) # 防止小白写错,自动帮他替换掉 - - try: - audio_opt = self.pipeline.pipeline( - self.hubert_model, - self.net_g, - sid, - audio, - input_audio_path1, - times, - f0_up_key, - f0_method, - file_index, - index_rate, - self.if_f0, - filter_radius, - self.tgt_sr, - resample_sr, - rms_mix_rate, - self.version, - protect, - crepe_hop_length, - f0_autotune, - f0_file=f0_file, - f0_min=f0_min, - f0_max=f0_max - ) - except AssertionError: - 
message = "Mismatching index version detected (v1 with v2, or v2 with v1)." - print(message) - return message, None - except NameError: - message = "RVC libraries are still loading. Please try again in a few seconds." - print(message) - return message, None - - if self.tgt_sr != resample_sr >= 16000: - self.tgt_sr = resample_sr - index_info = ( - "Index:\n%s." % file_index - if os.path.exists(file_index) - else "Index not used." - ) - end_time = time.time() - total_time = end_time - start_time - - output_folder = "audio-outputs" - os.makedirs(output_folder, exist_ok=True) - output_filename = "generated_audio_{}.wav" - output_count = 1 - while True: - current_output_path = os.path.join(output_folder, output_filename.format(output_count)) - if not os.path.exists(current_output_path): - break - output_count += 1 - - wavfile.write(current_output_path, self.tgt_sr, audio_opt) - print(f"Generated audio saved to: {current_output_path}") - return f"Success.\n {index_info}\nTime:\n npy:{times[0]}, f0:{times[1]}, infer:{times[2]}\nTotal Time: {total_time} seconds", (self.tgt_sr, audio_opt) - except: - info = traceback.format_exc() - logger.warn(info) - return info, (None, None) - - def vc_single_dont_save( - self, - sid, - input_audio_path0, - input_audio_path1, - f0_up_key, - f0_file, - f0_method, - file_index, - file_index2, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - crepe_hop_length, - f0_min, - note_min, - f0_max, - note_max, - f0_autotune, - ): - global total_time - total_time = 0 - start_time = time.time() - if not input_audio_path0 and not input_audio_path1: - return "You need to upload an audio", None - - if (not os.path.exists(input_audio_path0)) and (not os.path.exists(os.path.join(now_dir, input_audio_path0))): - return "Audio was not properly selected or doesn't exist", None - - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"\nStarting inference for '{os.path.basename(input_audio_path1)}'") - print("-------------------") - f0_up_key = int(f0_up_key) - if rvc_globals.NotesOrHertz and f0_method != 'rmvpe': - f0_min = note_to_hz(note_min) if note_min else 50 - f0_max = note_to_hz(note_max) if note_max else 1100 - print(f"Converted Min pitch: freq - {f0_min}\n" - f"Converted Max pitch: freq - {f0_max}") - else: - f0_min = f0_min or 50 - f0_max = f0_max or 1100 - try: - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"Attempting to load {input_audio_path1}....") - audio = load_audio(file=input_audio_path1, - sr=16000, - DoFormant=rvc_globals.DoFormant, - Quefrency=rvc_globals.Quefrency, - Timbre=rvc_globals.Timbre) - - audio_max = np.abs(audio).max() / 0.95 - if audio_max > 1: - audio /= audio_max - times = [0, 0, 0] - - if self.hubert_model is None: - self.hubert_model = load_hubert(self.config) - - try: - self.if_f0 = self.cpt.get("f0", 1) - except NameError: - message = "Model was not properly selected" - print(message) - return message, None - - file_index = ( - ( - file_index.strip(" ") - .strip('"') - .strip("\n") - .strip('"') - .strip(" ") - .replace("trained", "added") - ) - if file_index != "" - else file_index2 - ) # 防止小白写错,自动帮他替换掉 - - try: - audio_opt = self.pipeline.pipeline( - self.hubert_model, - self.net_g, - sid, - audio, - input_audio_path1, - times, - f0_up_key, - f0_method, - file_index, - index_rate, - self.if_f0, - filter_radius, - self.tgt_sr, - resample_sr, - rms_mix_rate, - self.version, - protect, - crepe_hop_length, - f0_autotune, - f0_file=f0_file, - f0_min=f0_min, - f0_max=f0_max - ) - 
except AssertionError: - message = "Mismatching index version detected (v1 with v2, or v2 with v1)." - print(message) - return message, None - except NameError: - message = "RVC libraries are still loading. Please try again in a few seconds." - print(message) - return message, None - - if self.tgt_sr != resample_sr >= 16000: - self.tgt_sr = resample_sr - index_info = ( - "Index:\n%s." % file_index - if os.path.exists(file_index) - else "Index not used." - ) - end_time = time.time() - total_time = end_time - start_time - - return f"Success.\n {index_info}\nTime:\n npy:{times[0]}, f0:{times[1]}, infer:{times[2]}\nTotal Time: {total_time} seconds", (self.tgt_sr, audio_opt) - except: - info = traceback.format_exc() - logger.warn(info) - return info, (None, None) - - - def vc_multi( - self, - sid, - dir_path, - opt_root, - paths, - f0_up_key, - f0_method, - file_index, - file_index2, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - format1, - crepe_hop_length, - f0_min, - note_min, - f0_max, - note_max, - f0_autotune, - ): - if rvc_globals.NotesOrHertz and f0_method != 'rmvpe': - f0_min = note_to_hz(note_min) if note_min else 50 - f0_max = note_to_hz(note_max) if note_max else 1100 - print(f"Converted Min pitch: freq - {f0_min}\n" - f"Converted Max pitch: freq - {f0_max}") - else: - f0_min = f0_min or 50 - f0_max = f0_max or 1100 - try: - dir_path = ( - dir_path.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - opt_root = opt_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - os.makedirs(opt_root, exist_ok=True) - try: - if dir_path != "": - paths = [ - os.path.join(dir_path, name) for name in os.listdir(dir_path) - ] - else: - paths = [path.name for path in paths] - except: - traceback.print_exc() - paths = [path.name for path in paths] - infos = [] - for path in paths: - info, opt = self.vc_single( - sid, - path, - f0_up_key, - None, - f0_method, - file_index, - file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ) - if "Success" in info: - try: - tgt_sr, audio_opt = opt - if format1 in ["wav", "flac"]: - sf.write( - "%s/%s.%s" - % (opt_root, os.path.basename(path), format1), - audio_opt, - tgt_sr, - ) - else: - path = "%s/%s.%s" % (opt_root, os.path.basename(path), format1) - with BytesIO() as wavf: - sf.write( - wavf, - audio_opt, - tgt_sr, - format="wav" - ) - wavf.seek(0, 0) - with open(path, "wb") as outf: - wav2(wavf, outf, format1) - except: - info += traceback.format_exc() - infos.append("%s->%s" % (os.path.basename(path), info)) - yield "\n".join(infos) - yield "\n".join(infos) - except: - yield traceback.format_exc() diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/log_mel.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/log_mel.py deleted file mode 100644 index 1e3b87d7ec73516ad79ee6eb1943cffb70bb52fa..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/log_mel.py +++ /dev/null @@ -1,74 +0,0 @@ -import librosa -import numpy as np -import torch -from typing import Tuple - -from .nets_utils import make_pad_mask - - -class LogMel(torch.nn.Module): - """Convert STFT to fbank feats - - The arguments is same as librosa.filters.mel - - Args: - fs: number > 0 [scalar] sampling rate of the incoming signal - n_fft: int > 0 [scalar] number of FFT components - n_mels: int > 0 [scalar] number of Mel bands to generate - fmin: float >= 0 [scalar] lowest 
frequency (in Hz) - fmax: float >= 0 [scalar] highest frequency (in Hz). - If `None`, use `fmax = fs / 2.0` - htk: use HTK formula instead of Slaney - norm: {None, 1, np.inf} [scalar] - if 1, divide the triangular mel weights by the width of the mel band - (area normalization). Otherwise, leave all the triangles aiming for - a peak value of 1.0 - - """ - - def __init__( - self, - fs: int = 16000, - n_fft: int = 512, - n_mels: int = 80, - fmin: float = None, - fmax: float = None, - htk: bool = False, - norm=1, - ): - super().__init__() - - fmin = 0 if fmin is None else fmin - fmax = fs / 2 if fmax is None else fmax - _mel_options = dict( - sr=fs, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax, htk=htk, norm=norm - ) - self.mel_options = _mel_options - - # Note(kamo): The mel matrix of librosa is different from kaldi. - melmat = librosa.filters.mel(**_mel_options) - # melmat: (D2, D1) -> (D1, D2) - self.register_buffer("melmat", torch.from_numpy(melmat.T).float()) - inv_mel = np.linalg.pinv(melmat) - self.register_buffer("inv_melmat", torch.from_numpy(inv_mel.T).float()) - - def extra_repr(self): - return ", ".join(f"{k}={v}" for k, v in self.mel_options.items()) - - def forward( - self, feat: torch.Tensor, ilens: torch.Tensor = None, - ) -> Tuple[torch.Tensor, torch.Tensor]: - # feat: (B, T, D1) x melmat: (D1, D2) -> mel_feat: (B, T, D2) - mel_feat = torch.matmul(feat, self.melmat) - - logmel_feat = (mel_feat + 1e-20).log() - # Zero padding - if ilens is not None: - logmel_feat = logmel_feat.masked_fill( - make_pad_mask(ilens, logmel_feat, 1), 0.0 - ) - else: - ilens = feat.new_full( - [feat.size(0)], fill_value=feat.size(1), dtype=torch.long - ) - return logmel_feat, ilens diff --git a/spaces/KyanChen/FunSR/datasets/cnn_sr_wrappers.py b/spaces/KyanChen/FunSR/datasets/cnn_sr_wrappers.py deleted file mode 100644 index f604e2d1566dd609d11e62898f4bef1dc7e8e1d8..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/datasets/cnn_sr_wrappers.py +++ /dev/null @@ -1,75 +0,0 @@ -import functools -import os.path -import random -import math - -import torchvision.transforms -from PIL import Image -import numpy as np -import torch -from einops import rearrange -from torch.utils.data import Dataset -from torchvision import transforms -from torchvision.transforms import InterpolationMode - -from datasets import register -import torchvision.transforms -from utils import to_pixel_samples, to_coordinates - - - -def resize_fn(img, size): - return transforms.ToTensor()( - transforms.Resize(size, Image.BICUBIC)( - transforms.ToPILImage()(img))) - - -@register('cnn_fixed_scale_sr_warp') -class CNNFixedScaleSRWarp(Dataset): - def __init__(self, dataset, scale_ratio, patch_size=48, - augment=False, val_mode=False, test_mode=False, - vis_continuous=False): - self.dataset = dataset - self.augment = augment - self.scale_ratio = scale_ratio - self.hr_size = int(patch_size * scale_ratio) - self.test_mode = test_mode - self.val_mode = val_mode - self.patch_size = patch_size - self.vis_continuous = vis_continuous - - def __len__(self): - return len(self.dataset) - - def __getitem__(self, idx): - img_hr, file_name = self.dataset[idx] - class_name = os.path.basename(os.path.dirname(file_name)) - file_name = os.path.basename(file_name).split('.')[0] - - if self.vis_continuous: - img_lr = transforms.Resize(self.patch_size, InterpolationMode.BICUBIC)( - transforms.CenterCrop(4*self.patch_size)(img_hr)) - - # img_hr: 3xHxW - if self.test_mode: - img_hr = transforms.CenterCrop(self.hr_size)(img_hr) - else: - 
img_hr = transforms.RandomCrop(self.hr_size)(img_hr) - - if not self.vis_continuous: - img_lr = transforms.Resize(self.patch_size, InterpolationMode.BICUBIC)(img_hr) - - if self.augment and not self.test_mode: - if random.random() < 0.5: - img_lr = img_lr.flip(-1) - img_hr = img_hr.flip(-1) - if random.random() < 0.5: - img_lr = img_lr.flip(-2) - img_hr = img_hr.flip(-2) - - return { - 'img': img_lr, - 'gt': img_hr, - 'class_name': class_name, - 'filename': file_name - } diff --git a/spaces/LDY/ImageToLine/README.md b/spaces/LDY/ImageToLine/README.md deleted file mode 100644 index 7dd11a38ff8df7c2e06facda07bf8035b086e371..0000000000000000000000000000000000000000 --- a/spaces/LDY/ImageToLine/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ImageToLine -emoji: 💩 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.0.24 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/LanguageBind/LanguageBind/data/build_datasets.py b/spaces/LanguageBind/LanguageBind/data/build_datasets.py deleted file mode 100644 index 1c2275c159a3d8ba0cb0dd0ee6b84a41eeabcfb7..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/data/build_datasets.py +++ /dev/null @@ -1,174 +0,0 @@ -import os -import time -from dataclasses import dataclass -from multiprocessing import Value - -import torch -from torch.utils.data import DataLoader -from torch.utils.data.distributed import DistributedSampler - -from data.base_datasets import VAT_dataset -from data.new_loadvat import get_wds_dataset -from open_clip import get_tokenizer -from open_clip.factory import HF_HUB_PREFIX - - -class SharedEpoch: - def __init__(self, epoch: int = 0): - self.shared_epoch = Value('i', epoch) - - def set_value(self, epoch): - self.shared_epoch.value = epoch - - def get_value(self): - return self.shared_epoch.value - -@dataclass -class DataInfo: - dataloader: DataLoader - sampler: DistributedSampler = None - shared_epoch: SharedEpoch = None - - def set_epoch(self, epoch): - if self.shared_epoch is not None: - self.shared_epoch.set_value(epoch) - if self.sampler is not None and isinstance(self.sampler, DistributedSampler): - self.sampler.set_epoch(epoch) - -def get_VAT_dataset(args): - dataset = VAT_dataset(args) - num_samples = len(dataset) - sampler = DistributedSampler(dataset) if args.distributed else None - shuffle = sampler is None - - dataloader = DataLoader( - dataset, - batch_size=args.batch_size, - # prefetch_factor=2, - # persistent_workers=True, - shuffle=shuffle, - num_workers=args.workers, - pin_memory=True, - sampler=sampler, - drop_last=True, - ) - dataloader.num_samples = num_samples - dataloader.num_batches = len(dataloader) - - return DataInfo(dataloader, sampler) - -def get_data(args, epoch=0): - data = {} - - if args.do_train: - if args.train_data.endswith(".json"): - data[f"{args.clip_type}_pt"] = get_VAT_dataset(args) - elif args.train_data.endswith(".tar"): - data[f"{args.clip_type}_pt"] = get_wds_dataset(args, is_train=True, epoch=epoch) - else: - raise NameError - - if args.do_eval: - temp_batch_size = args.batch_size - args.batch_size = 8 if args.val_vl_ret_data else 16 - data_root = "/apdcephfs_cq3/share_1311970/downstream_datasets/VideoTextRetrieval/vtRetdata" - if args.val_vl_ret_data: - data["vl_ret"] = [] - for val_vl_ret_data in args.val_vl_ret_data: - if val_vl_ret_data == "msrvtt": - args.train_csv = os.path.join(f'{data_root}/MSRVTT/MSRVTT_train.9k.csv') - 
args.val_csv = os.path.join(f'{data_root}/MSRVTT/MSRVTT_JSFUSION_test.csv') - args.data_path = os.path.join(f'{data_root}/MSRVTT/MSRVTT_data.json') - args.features_path = os.path.join(f'{data_root}/MSRVTT/MSRVTT_Videos') - elif val_vl_ret_data == "msvd": - args.data_path = os.path.join(f'{data_root}/MSVD') - args.features_path = os.path.join(f'{data_root}/MSVD/MSVD_Videos') - elif val_vl_ret_data == "activity": - args.data_path = os.path.join(f'{data_root}/ActivityNet') - args.features_path = os.path.join(f'{data_root}/ActivityNet/Videos/Activity_Videos') - elif val_vl_ret_data == "didemo": - args.data_path = os.path.join(f'{data_root}/Didemo') - args.features_path = os.path.join(f'{data_root}/Didemo/videos') - else: - raise NameError - - args.batch_size_val = args.batch_size if args.batch_size_val == 0 else args.batch_size_val - args.max_frames = args.num_frames - args.num_thread_reader = args.workers - args.slice_framepos = 2 # "0: cut from head frames; 1: cut from tail frames; 2: extract frames uniformly." - - from vl_ret.data_dataloaders import DATALOADER_DICT - - tokenizer = get_tokenizer(HF_HUB_PREFIX + args.model, cache_dir=args.cache_dir) - test_dataloader, test_length = None, 0 - if DATALOADER_DICT[val_vl_ret_data]["test"] is not None: - test_dataloader, test_length = DATALOADER_DICT[val_vl_ret_data]["test"](args, tokenizer) - - if DATALOADER_DICT[val_vl_ret_data]["val"] is not None: - val_dataloader, val_length = DATALOADER_DICT[val_vl_ret_data]["val"](args, tokenizer, subset="val") - else: - val_dataloader, val_length = test_dataloader, test_length - ## report validation results if the ["test"] is None - if test_dataloader is None: - test_dataloader, test_length = val_dataloader, val_length - - data["vl_ret"].append({val_vl_ret_data: test_dataloader}) - - if args.val_v_cls_data: - from v_cls import get_video_cls_dataloader - args.data_set = args.val_v_cls_data - args.num_workers = args.workers - args.num_sample = 1 # no repeat - data["v_cls"] = get_video_cls_dataloader(args) - - - if args.val_a_cls_data: - data["a_cls"] = [] - data_root = "/apdcephfs_cq3/share_1311970/downstream_datasets/Audio" - temp_val_a_cls_data = args.val_a_cls_data - for val_a_cls_data in temp_val_a_cls_data: - from a_cls.datasets import get_audio_dataset - args.val_a_cls_data = val_a_cls_data - args.audio_data_path = os.path.join(data_root, f'{val_a_cls_data.lower()}/test') - data['a_cls'].append({val_a_cls_data: get_audio_dataset(args)}) - args.val_a_cls_data = temp_val_a_cls_data - - if args.imagenet_val is not None: - from i_cls.datasets import get_imagenet - data['i_cls'] = {} - data['i_cls']["imagenet-val"] = get_imagenet(args, "val") - if args.imagenet_v2 is not None: - from i_cls.datasets import get_imagenet - if data.get('i_cls', None) is None: - data['i_cls'] = {} - data['i_cls']["imagenet-v2"] = get_imagenet(args, "v2") - - if args.val_d_cls_data: - data["d_cls"] = [] - data_root = "/apdcephfs_cq3/share_1311970/downstream_datasets/Depth" - temp_val_d_cls_data = args.val_d_cls_data - for val_d_cls_data in temp_val_d_cls_data: - from d_cls.datasets import get_depth_dataset - args.val_d_cls_data = val_d_cls_data - args.depth_data_path = os.path.join(data_root, f'{val_d_cls_data.lower()}/data/val') - data['d_cls'].append({val_d_cls_data: get_depth_dataset(args)}) - args.val_d_cls_data = temp_val_d_cls_data - - - if args.val_t_cls_data: - data["t_cls"] = [] - data_root = "/apdcephfs_cq3/share_1311970/downstream_datasets/Thermal" - temp_val_t_cls_data = args.val_t_cls_data - for val_t_cls_data in 
temp_val_t_cls_data: - from t_cls.datasets import get_thermal_dataset - args.val_t_cls_data = val_t_cls_data - args.thermal_data_path = os.path.join(data_root, f'{val_t_cls_data.lower()}/val') - data['t_cls'].append({val_t_cls_data: get_thermal_dataset(args)}) - args.val_t_cls_data = temp_val_t_cls_data - - args.batch_size = temp_batch_size - - return data - - - diff --git a/spaces/Lbin123/Lbingo/cloudflare/worker.js b/spaces/Lbin123/Lbingo/cloudflare/worker.js deleted file mode 100644 index e0debd750615f1329b2c72fbce73e1b9291f7137..0000000000000000000000000000000000000000 --- a/spaces/Lbin123/Lbingo/cloudflare/worker.js +++ /dev/null @@ -1,18 +0,0 @@ -const TRAGET_HOST='hf4all-bingo.hf.space' // 请将此域名改成你自己的,域名信息在设置》站点域名查看。 - -export default { - async fetch(request) { - const uri = new URL(request.url); - if (uri.protocol === 'http:') { - uri.protocol = 'https:'; - return new Response('', { - status: 301, - headers: { - location: uri.toString(), - }, - }) - } - uri.host = TRAGET_HOST - return fetch(new Request(uri.toString(), request)); - }, -}; diff --git a/spaces/Lbin123/Lbingo/src/pages/api/image.ts b/spaces/Lbin123/Lbingo/src/pages/api/image.ts deleted file mode 100644 index 4b894bea86050c0f3888cc56f60c0cb7f8b57cfc..0000000000000000000000000000000000000000 --- a/spaces/Lbin123/Lbingo/src/pages/api/image.ts +++ /dev/null @@ -1,40 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { debug } from '@/lib/isomorphic' -import { createHeaders } from '@/lib/utils' -import { createImage } from '@/lib/bots/bing/utils' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const { prompt, id } = req.query - if (!prompt) { - return res.json({ - result: { - value: 'Image', - message: 'No Prompt' - } - }) - } - try { - const headers = createHeaders(req.cookies, { - IMAGE_BING_COOKIE: process.env.IMAGE_BING_COOKIE - }) - - debug('headers', headers) - const response = await createImage(String(prompt), String(id), { - ...headers, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - }) - res.writeHead(200, { - 'Content-Type': 'text/plain; charset=UTF-8', - }) - return res.end(response) - } catch (e) { - return res.json({ - result: { - value: 'Error', - message: `${e}` - } - }) - } -} diff --git a/spaces/Lewislou/Lewislou-cell-seg-sribd/overlay.py b/spaces/Lewislou/Lewislou-cell-seg-sribd/overlay.py deleted file mode 100644 index bc16d7749f7aee48d866815eca33fd9bfed1698f..0000000000000000000000000000000000000000 --- a/spaces/Lewislou/Lewislou-cell-seg-sribd/overlay.py +++ /dev/null @@ -1,116 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -###overlay -import cv2 -import math -import random -import colorsys -import numpy as np -import itertools -import matplotlib.pyplot as plt -from matplotlib import cm -import os -import scipy.io as io -def get_bounding_box(img): - """Get bounding box coordinate information.""" - rows = np.any(img, axis=1) - cols = np.any(img, axis=0) - rmin, rmax = np.where(rows)[0][[0, -1]] - cmin, cmax = np.where(cols)[0][[0, -1]] - # due to python indexing, need to add 1 to max - # else accessing will be 1px in the box, not out - rmax += 1 - cmax += 1 - return [rmin, rmax, cmin, cmax] -#### -def colorize(ch, vmin, vmax): - """Will clamp value value outside the provided range to vmax and vmin.""" - cmap = plt.get_cmap("jet") - ch = np.squeeze(ch.astype("float32")) - vmin = vmin if vmin is not None else ch.min() - vmax = vmax if vmax is not None else ch.max() - ch[ch > 
vmax] = vmax # clamp value - ch[ch < vmin] = vmin - ch = (ch - vmin) / (vmax - vmin + 1.0e-16) - # take RGB from RGBA heat map - ch_cmap = (cmap(ch)[..., :3] * 255).astype("uint8") - return ch_cmap - - -#### -def random_colors(N, bright=True): - """Generate random colors. - - To get visually distinct colors, generate them in HSV space then - convert to RGB. - """ - brightness = 1.0 if bright else 0.7 - hsv = [(i / N, 1, brightness) for i in range(N)] - colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv)) - random.shuffle(colors) - return colors - - -#### -def visualize_instances_map( - input_image, inst_map, type_map=None, type_colour=None, line_thickness=2 -): - """Overlays segmentation results on image as contours. - - Args: - input_image: input image - inst_map: instance mask with unique value for every object - type_map: type mask with unique value for every class - type_colour: a dict of {type : colour} , `type` is from 0-N - and `colour` is a tuple of (R, G, B) - line_thickness: line thickness of contours - - Returns: - overlay: output image with segmentation overlay as contours - """ - overlay = np.copy((input_image).astype(np.uint8)) - - inst_list = list(np.unique(inst_map)) # get list of instances - inst_list.remove(0) # remove background - - inst_rng_colors = random_colors(len(inst_list)) - inst_rng_colors = np.array(inst_rng_colors) * 255 - inst_rng_colors = inst_rng_colors.astype(np.uint8) - - for inst_idx, inst_id in enumerate(inst_list): - inst_map_mask = np.array(inst_map == inst_id, np.uint8) # get single object - y1, y2, x1, x2 = get_bounding_box(inst_map_mask) - y1 = y1 - 2 if y1 - 2 >= 0 else y1 - x1 = x1 - 2 if x1 - 2 >= 0 else x1 - x2 = x2 + 2 if x2 + 2 <= inst_map.shape[1] - 1 else x2 - y2 = y2 + 2 if y2 + 2 <= inst_map.shape[0] - 1 else y2 - inst_map_crop = inst_map_mask[y1:y2, x1:x2] - contours_crop = cv2.findContours( - inst_map_crop, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE - ) - # only has 1 instance per map, no need to check #contour detected by opencv - #print(contours_crop) - contours_crop = np.squeeze( - contours_crop[0][0].astype("int32") - ) # * opencv protocol format may break - - if len(contours_crop.shape) == 1: - contours_crop = contours_crop.reshape(1,-1) - #print(contours_crop.shape) - contours_crop += np.asarray([[x1, y1]]) # index correction - if type_map is not None: - type_map_crop = type_map[y1:y2, x1:x2] - type_id = np.unique(type_map_crop).max() # non-zero - inst_colour = type_colour[type_id] - else: - inst_colour = (inst_rng_colors[inst_idx]).tolist() - cv2.drawContours(overlay, [contours_crop], -1, inst_colour, line_thickness) - return overlay - - -# In[ ]: - - - - diff --git a/spaces/Lianjd/stock_dashboard/backtrader/indicators/basicops.py b/spaces/Lianjd/stock_dashboard/backtrader/indicators/basicops.py deleted file mode 100644 index 6abf0dbb4a2bd3e2d0a0eb1bbf8e7658047dff8a..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/indicators/basicops.py +++ /dev/null @@ -1,494 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. 
-# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -import functools -import math -import operator - -from ..utils.py3 import map, range - -from . import Indicator - - -class PeriodN(Indicator): - ''' - Base class for indicators which take a period (__init__ has to be called - either via super or explicitly) - - This class has no defined lines - ''' - params = (('period', 1),) - - def __init__(self): - super(PeriodN, self).__init__() - self.addminperiod(self.p.period) - - -class OperationN(PeriodN): - ''' - Calculates "func" for a given period - - Serves as a base for classes that work with a period and can express the - logic in a callable object - - Note: - Base classes must provide a "func" attribute which is a callable - - Formula: - - line = func(data, period) - ''' - def next(self): - self.line[0] = self.func(self.data.get(size=self.p.period)) - - def once(self, start, end): - dst = self.line.array - src = self.data.array - period = self.p.period - func = self.func - - for i in range(start, end): - dst[i] = func(src[i - period + 1: i + 1]) - - -class BaseApplyN(OperationN): - ''' - Base class for ApplyN and others which may take a ``func`` as a parameter - but want to define the lines in the indicator. - - Calculates ``func`` for a given period where func is given as a parameter, - aka named argument or ``kwarg`` - - Formula: - - lines[0] = func(data, period) - - Any extra lines defined beyond the first (index 0) are not calculated - ''' - params = (('func', None),) - - def __init__(self): - self.func = self.p.func - super(BaseApplyN, self).__init__() - - -class ApplyN(BaseApplyN): - ''' - Calculates ``func`` for a given period - - Formula: - - line = func(data, period) - ''' - lines = ('apply',) - - -class Highest(OperationN): - ''' - Calculates the highest value for the data in a given period - - Uses the built-in ``max`` for the calculation - - Formula: - - highest = max(data, period) - ''' - alias = ('MaxN',) - lines = ('highest',) - func = max - - -class Lowest(OperationN): - ''' - Calculates the lowest value for the data in a given period - - Uses the built-in ``min`` for the calculation - - Formula: - - lowest = min(data, period) - ''' - alias = ('MinN',) - lines = ('lowest',) - func = min - - -class ReduceN(OperationN): - ''' - Calculates the Reduced value of the ``period`` data points applying - ``function`` - - Uses the built-in ``reduce`` for the calculation plus the ``func`` that - subclassess define - - Formula: - - reduced = reduce(function(data, period)), initializer=initializer) - - Notes: - - - In order to mimic the python ``reduce``, this indicator takes a - ``function`` non-named argument as the 1st argument, unlike other - Indicators which take only named arguments - ''' - lines = ('reduced',) - func = functools.reduce - - def __init__(self, function, **kwargs): - if 'initializer' not in kwargs: - self.func = functools.partial(self.func, function) - else: - self.func = functools.partial(self.func, function, - initializer=kwargs['initializer']) - - super(ReduceN, self).__init__() - - 
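As the ReduceN docstring above notes, the reduction function is passed as the first positional argument rather than as a named parameter. A minimal usage sketch under that reading (the strategy name and feed are illustrative; the import path assumes the package layout shown in the diff paths above):

import operator

import backtrader as bt
from backtrader.indicators.basicops import ReduceN


class RollingProduct(bt.Strategy):
    def __init__(self):
        # product of the last 5 closes; 'initializer' seeds functools.reduce
        self.prod5 = ReduceN(self.data.close, operator.mul,
                             period=5, initializer=1.0)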
-class SumN(OperationN): - ''' - Calculates the Sum of the data values over a given period - - Uses ``math.fsum`` for the calculation rather than the built-in ``sum`` to - avoid precision errors - - Formula: - - sumn = sum(data, period) - ''' - lines = ('sumn',) - func = math.fsum - - -class AnyN(OperationN): - ''' - Has a value of ``True`` (stored as ``1.0`` in the lines) if *any* of the - values in the ``period`` evaluates to non-zero (ie: ``True``) - - Uses the built-in ``any`` for the calculation - - Formula: - - anyn = any(data, period) - ''' - lines = ('anyn',) - func = any - - -class AllN(OperationN): - ''' - Has a value of ``True`` (stored as ``1.0`` in the lines) if *all* of the - values in the ``period`` evaluates to non-zero (ie: ``True``) - - Uses the built-in ``all`` for the calculation - - Formula: - - alln = all(data, period) - ''' - lines = ('alln',) - func = all - - -class FindFirstIndex(OperationN): - ''' - Returns the index of the last data that satisfies equality with the - condition generated by the parameter _evalfunc - - Note: - Returned indexes look backwards. 0 is the current index and 1 is - the previous bar. - - Formula: - - index = first for which data[index] == _evalfunc(data) - ''' - lines = ('index',) - params = (('_evalfunc', None),) - - def func(self, iterable): - m = self.p._evalfunc(iterable) - return next(i for i, v in enumerate(reversed(iterable)) if v == m) - - -class FindFirstIndexHighest(FindFirstIndex): - ''' - Returns the index of the first data that is the highest in the period - - Note: - Returned indexes look backwards. 0 is the current index and 1 is - the previous bar. - - Formula: - - index = index of first data which is the highest - ''' - params = (('_evalfunc', max),) - - -class FindFirstIndexLowest(FindFirstIndex): - ''' - Returns the index of the first data that is the lowest in the period - - Note: - Returned indexes look backwards. 0 is the current index and 1 is - the previous bar. - - Formula: - - index = index of first data which is the lowest - ''' - params = (('_evalfunc', min),) - - -class FindLastIndex(OperationN): - ''' - Returns the index of the last data that satisfies equality with the - condition generated by the parameter _evalfunc - - Note: - Returned indexes look backwards. 0 is the current index and 1 is - the previous bar. - - Formula: - - index = last for which data[index] == _evalfunc(data) - ''' - lines = ('index',) - params = (('_evalfunc', None),) - - def func(self, iterable): - m = self.p._evalfunc(iterable) - index = next(i for i, v in enumerate(iterable) if v == m) - # The iterable goes from 0 -> period - 1. If the last element - # which is the current bar is returned and without the -1 then - # period - index = 1 ... and must be zero! - return self.p.period - index - 1 - - -class FindLastIndexHighest(FindLastIndex): - ''' - Returns the index of the last data that is the highest in the period - - Note: - Returned indexes look backwards. 0 is the current index and 1 is - the previous bar. - - Formula: - - index = index of last data which is the highest - ''' - params = (('_evalfunc', max),) - - -class FindLastIndexLowest(FindLastIndex): - ''' - Returns the index of the last data that is the lowest in the period - - Note: - Returned indexes look backwards. 0 is the current index and 1 is - the previous bar. 
-
-    Formula:
-      - index = index of last data which is the lowest
-    '''
-    params = (('_evalfunc', min),)
-
-
-class Accum(Indicator):
-    '''
-    Cumulative sum of the data values
-
-    Formula:
-      - accum += data
-    '''
-    alias = ('CumSum', 'CumulativeSum',)
-    lines = ('accum',)
-    params = (('seed', 0.0),)
-
-    # xxxstart methods use the seed (starting value) and passed data to
-    # construct the first value keeping the minperiod to 1 since no
-    # initial look-back value is needed
-
-    def nextstart(self):
-        self.line[0] = self.p.seed + self.data[0]
-
-    def next(self):
-        self.line[0] = self.line[-1] + self.data[0]
-
-    def oncestart(self, start, end):
-        dst = self.line.array
-        src = self.data.array
-        prev = self.p.seed
-
-        for i in range(start, end):
-            dst[i] = prev = prev + src[i]
-
-    def once(self, start, end):
-        dst = self.line.array
-        src = self.data.array
-        prev = dst[start - 1]
-
-        for i in range(start, end):
-            dst[i] = prev = prev + src[i]
-
-
-class Average(PeriodN):
-    '''
-    Averages the given data arithmetically over a period
-
-    Formula:
-      - av = data(period) / period
-
-    See also:
-      - https://en.wikipedia.org/wiki/Arithmetic_mean
-    '''
-    alias = ('ArithmeticMean', 'Mean',)
-    lines = ('av',)
-
-    def next(self):
-        self.line[0] = \
-            math.fsum(self.data.get(size=self.p.period)) / self.p.period
-
-    def once(self, start, end):
-        src = self.data.array
-        dst = self.line.array
-        period = self.p.period
-
-        for i in range(start, end):
-            dst[i] = math.fsum(src[i - period + 1:i + 1]) / period
-
-
-class ExponentialSmoothing(Average):
-    '''
-    Averages the given data over a period using exponential smoothing
-
-    A regular ArithmeticMean (Average) is used as the seed value considering
-    the first period values of data
-
-    Formula:
-      - av = prev * (1 - alpha) + data * alpha
-
-    See also:
-      - https://en.wikipedia.org/wiki/Exponential_smoothing
-    '''
-    alias = ('ExpSmoothing',)
-    params = (('alpha', None),)
-
-    def __init__(self):
-        self.alpha = self.p.alpha
-        if self.alpha is None:
-            self.alpha = 2.0 / (1.0 + self.p.period)  # default EMA value
-
-        self.alpha1 = 1.0 - self.alpha
-
-        super(ExponentialSmoothing, self).__init__()
-
-    def nextstart(self):
-        # Fetch the seed value from the base class calculation
-        super(ExponentialSmoothing, self).next()
-
-    def next(self):
-        self.line[0] = self.line[-1] * self.alpha1 + self.data[0] * self.alpha
-
-    def oncestart(self, start, end):
-        # Fetch the seed value from the base class calculation
-        super(ExponentialSmoothing, self).once(start, end)
-
-    def once(self, start, end):
-        darray = self.data.array
-        larray = self.line.array
-        alpha = self.alpha
-        alpha1 = self.alpha1
-
-        # Seed value from SMA calculated with the call to oncestart
-        prev = larray[start - 1]
-        for i in range(start, end):
-            larray[i] = prev = prev * alpha1 + darray[i] * alpha
-
-
-class ExponentialSmoothingDynamic(ExponentialSmoothing):
-    '''
-    Averages the given data over a period using exponential smoothing
-
-    A regular ArithmeticMean (Average) is used as the seed value considering
-    the first period values of data
-
-    Note:
-      - alpha is an array of values which can be calculated dynamically
-
-    Formula:
-      - av = prev * (1 - alpha) + data * alpha
-
-    See also:
-      - https://en.wikipedia.org/wiki/Exponential_smoothing
-    '''
-    alias = ('ExpSmoothingDynamic',)
-
-    def __init__(self):
-        super(ExponentialSmoothingDynamic, self).__init__()
-
-        # Hack: alpha is a "line" and carries a minperiod which is not being
-        # considered because this indicator makes no line assignment. It has
-        # therefore to be considered manually
-        minperioddiff = max(0, self.alpha._minperiod - self.p.period)
-        self.lines[0].incminperiod(minperioddiff)
-
-    def next(self):
-        self.line[0] = \
-            self.line[-1] * self.alpha1[0] + self.data[0] * self.alpha[0]
-
-    def once(self, start, end):
-        darray = self.data.array
-        larray = self.line.array
-        alpha = self.alpha.array
-        alpha1 = self.alpha1.array
-
-        # Seed value from SMA calculated with the call to oncestart
-        prev = larray[start - 1]
-        for i in range(start, end):
-            larray[i] = prev = prev * alpha1[i] + darray[i] * alpha[i]
-
-
-class WeightedAverage(PeriodN):
-    '''
-    Calculates the weighted average of the given data over a period
-
-    The default weights (if none are provided) are linear, assigning more
-    weight to the most recent data
-
-    The result will be multiplied by a given "coef"
-
-    Formula:
-      - av = coef * sum(mul(data, period), weights)
-
-    See:
-      - https://en.wikipedia.org/wiki/Weighted_arithmetic_mean
-    '''
-    alias = ('AverageWeighted',)
-    lines = ('av',)
-    params = (('coef', 1.0), ('weights', tuple()),)
-
-    def __init__(self):
-        super(WeightedAverage, self).__init__()
-
-    def next(self):
-        data = self.data.get(size=self.p.period)
-        dataweighted = map(operator.mul, data, self.p.weights)
-        self.line[0] = self.p.coef * math.fsum(dataweighted)
-
-    def once(self, start, end):
-        darray = self.data.array
-        larray = self.line.array
-        period = self.p.period
-        coef = self.p.coef
-        weights = self.p.weights
-
-        for i in range(start, end):
-            data = darray[i - period + 1: i + 1]
-            larray[i] = coef * math.fsum(map(operator.mul, data, weights))
diff --git a/spaces/LuxOAI/ChatGpt-Web/app/constant.ts b/spaces/LuxOAI/ChatGpt-Web/app/constant.ts
deleted file mode 100644
index 0aa27b1ab2bcac688f73e14232072e88dac065f7..0000000000000000000000000000000000000000
--- a/spaces/LuxOAI/ChatGpt-Web/app/constant.ts
+++ /dev/null
@@ -1,41 +0,0 @@
-export const OWNER = "Yidadaa";
-export const REPO = "ChatGPT-Next-Web";
-export const REPO_URL = `https://github.com/${OWNER}/${REPO}`;
-export const ISSUE_URL = `https://github.com/${OWNER}/${REPO}/issues`;
-export const UPDATE_URL = `${REPO_URL}#keep-updated`;
-export const FETCH_COMMIT_URL = `https://api.github.com/repos/${OWNER}/${REPO}/commits?per_page=1`;
-export const FETCH_TAG_URL = `https://api.github.com/repos/${OWNER}/${REPO}/tags?per_page=1`;
-export const RUNTIME_CONFIG_DOM = "danger-runtime-config";
-
-export enum Path {
-  Home = "/",
-  Chat = "/chat",
-  Settings = "/settings",
-  NewChat = "/new-chat",
-  Masks = "/masks",
-}
-
-export enum SlotID {
-  AppBody = "app-body",
-}
-
-export enum FileName {
-  Masks = "masks.json",
-  Prompts = "prompts.json",
-}
-
-export enum StoreKey {
-  Chat = "chat-next-web-store",
-  Access = "access-control",
-  Config = "app-config",
-  Mask = "mask-store",
-  Prompt = "prompt-store",
-  Update = "chat-update",
-}
-
-export const MAX_SIDEBAR_WIDTH = 500;
-export const MIN_SIDEBAR_WIDTH = 230;
-export const NARROW_SIDEBAR_WIDTH = 100;
-
-export const ACCESS_CODE_PREFIX = "ak-";
-export const LAST_INPUT_KEY = "last-input";
\ No newline at end of file
diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/base_network.py b/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/base_network.py
deleted file mode 100644
index bc33f0e70082bf4be536fe5cf576f40c48800159..0000000000000000000000000000000000000000
--- a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/base_network.py
+++
/dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -import torch.nn as nn -from torch.nn import init - - -class BaseNetwork(nn.Module): - def __init__(self): - super(BaseNetwork, self).__init__() - - @staticmethod - def modify_commandline_options(parser, is_train): - return parser - - def print_network(self): - if isinstance(self, list): - self = self[0] - num_params = 0 - for param in self.parameters(): - num_params += param.numel() - print( - "Network [%s] was created. Total number of parameters: %.1f million. " - "To see the architecture, do print(network)." % (type(self).__name__, num_params / 1000000) - ) - - def init_weights(self, init_type="normal", gain=0.02): - def init_func(m): - classname = m.__class__.__name__ - if classname.find("BatchNorm2d") != -1: - if hasattr(m, "weight") and m.weight is not None: - init.normal_(m.weight.data, 1.0, gain) - if hasattr(m, "bias") and m.bias is not None: - init.constant_(m.bias.data, 0.0) - elif hasattr(m, "weight") and (classname.find("Conv") != -1 or classname.find("Linear") != -1): - if init_type == "normal": - init.normal_(m.weight.data, 0.0, gain) - elif init_type == "xavier": - init.xavier_normal_(m.weight.data, gain=gain) - elif init_type == "xavier_uniform": - init.xavier_uniform_(m.weight.data, gain=1.0) - elif init_type == "kaiming": - init.kaiming_normal_(m.weight.data, a=0, mode="fan_in") - elif init_type == "orthogonal": - init.orthogonal_(m.weight.data, gain=gain) - elif init_type == "none": # uses pytorch's default init method - m.reset_parameters() - else: - raise NotImplementedError("initialization method [%s] is not implemented" % init_type) - if hasattr(m, "bias") and m.bias is not None: - init.constant_(m.bias.data, 0.0) - - self.apply(init_func) - - # propagate to children - for m in self.children(): - if hasattr(m, "init_weights"): - m.init_weights(init_type, gain) diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/inference/infer_tool.py b/spaces/MashiroSA/sovits-emu-voice-transform/inference/infer_tool.py deleted file mode 100644 index 5328c549bfcfa789a74e56729219e5607a6612a6..0000000000000000000000000000000000000000 --- a/spaces/MashiroSA/sovits-emu-voice-transform/inference/infer_tool.py +++ /dev/null @@ -1,340 +0,0 @@ -import hashlib -import io -import json -import logging -import os -import time -from pathlib import Path -from inference import slicer - -import librosa -import numpy as np -# import onnxruntime -import parselmouth -import soundfile -import torch -import torchaudio - -import cluster -from hubert import hubert_model -import utils -from models import SynthesizerTrn - -logging.getLogger('matplotlib').setLevel(logging.WARNING) - - -def read_temp(file_name): - if not os.path.exists(file_name): - with open(file_name, "w") as f: - f.write(json.dumps({"info": "temp_dict"})) - return {} - else: - try: - with open(file_name, "r") as f: - data = f.read() - data_dict = json.loads(data) - if os.path.getsize(file_name) > 50 * 1024 * 1024: - f_name = file_name.replace("\\", "/").split("/")[-1] - print(f"clean {f_name}") - for wav_hash in list(data_dict.keys()): - if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600: - del data_dict[wav_hash] - except Exception as e: - print(e) - print(f"{file_name} error,auto rebuild file") - data_dict = {"info": "temp_dict"} - return data_dict - - -def write_temp(file_name, data): - with open(file_name, "w") as f: - f.write(json.dumps(data)) - - -def timeit(func): - def run(*args, **kwargs): - t = 
time.time()
-        res = func(*args, **kwargs)
-        print('executing \'%s\' took %.3fs' % (func.__name__, time.time() - t))
-        return res
-
-    return run
-
-
-def format_wav(audio_path):
-    if Path(audio_path).suffix == '.wav':
-        return
-    raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None)
-    soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate)
-
-
-def get_end_file(dir_path, end):
-    file_lists = []
-    for root, dirs, files in os.walk(dir_path):
-        files = [f for f in files if f[0] != '.']
-        dirs[:] = [d for d in dirs if d[0] != '.']
-        for f_file in files:
-            if f_file.endswith(end):
-                file_lists.append(os.path.join(root, f_file).replace("\\", "/"))
-    return file_lists
-
-
-def get_md5(content):
-    return hashlib.new("md5", content).hexdigest()
-
-def fill_a_to_b(a, b):
-    if len(a) < len(b):
-        for _ in range(0, len(b) - len(a)):
-            a.append(a[0])
-
-def mkdir(paths: list):
-    for path in paths:
-        if not os.path.exists(path):
-            os.mkdir(path)
-
-def pad_array(arr, target_length):
-    current_length = arr.shape[0]
-    if current_length >= target_length:
-        return arr
-    else:
-        pad_width = target_length - current_length
-        pad_left = pad_width // 2
-        pad_right = pad_width - pad_left
-        padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0))
-        return padded_arr
-
-def split_list_by_n(list_collection, n, pre=0):
-    for i in range(0, len(list_collection), n):
-        yield list_collection[i - pre if i - pre >= 0 else i: i + n]
-
-
-class F0FilterException(Exception):
-    pass
-
-class Svc(object):
-    def __init__(self, net_g_path, config_path,
-                 device=None,
-                 cluster_model_path="logs/44k/kmeans_10000.pt",
-                 nsf_hifigan_enhance=False
-                 ):
-        self.net_g_path = net_g_path
-        if device is None:
-            self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-        else:
-            self.dev = torch.device(device)
-        self.net_g_ms = None
-        self.hps_ms = utils.get_hparams_from_file(config_path)
-        self.target_sample = self.hps_ms.data.sampling_rate
-        self.hop_size = self.hps_ms.data.hop_length
-        self.spk2id = self.hps_ms.spk
-        self.nsf_hifigan_enhance = nsf_hifigan_enhance
-        # load the HuBERT content encoder
-        self.hubert_model = utils.get_hubert_model().to(self.dev)
-        self.load_model()
-        if os.path.exists(cluster_model_path):
-            self.cluster_model = cluster.get_cluster_model(cluster_model_path)
-        if self.nsf_hifigan_enhance:
-            from modules.enhancer import Enhancer
-            self.enhancer = Enhancer('nsf-hifigan', 'pretrain/nsf_hifigan/model', device=self.dev)
-
-    def load_model(self):
-        # load the model configuration
-        self.net_g_ms = SynthesizerTrn(
-            self.hps_ms.data.filter_length // 2 + 1,
-            self.hps_ms.train.segment_size // self.hps_ms.data.hop_length,
-            **self.hps_ms.model)
-        _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None)
-        if "half" in self.net_g_path and torch.cuda.is_available():
-            _ = self.net_g_ms.half().eval().to(self.dev)
-        else:
-            _ = self.net_g_ms.eval().to(self.dev)
-
-    def get_unit_f0(self, in_path, tran, cluster_infer_ratio, speaker, f0_filter, F0_mean_pooling):
-
-        wav, sr = librosa.load(in_path, sr=self.target_sample)
-
-        if F0_mean_pooling == True:
-            f0, uv = utils.compute_f0_uv_torchcrepe(torch.FloatTensor(wav), sampling_rate=self.target_sample, hop_length=self.hop_size, device=self.dev)
-            if f0_filter and sum(f0) == 0:
-                raise F0FilterException("No voice detected")
-            f0 = torch.FloatTensor(list(f0))
-            uv = torch.FloatTensor(list(uv))
-        if F0_mean_pooling == False:
-            f0 = utils.compute_f0_parselmouth(wav, sampling_rate=self.target_sample, hop_length=self.hop_size)
-            if f0_filter and sum(f0) == 0:
-                raise F0FilterException("No voice detected")
-            f0, uv = utils.interpolate_f0(f0)
-            f0 = torch.FloatTensor(f0)
-            uv = torch.FloatTensor(uv)
-
-        f0 = f0 * 2 ** (tran / 12)
-        f0 = f0.unsqueeze(0).to(self.dev)
-        uv = uv.unsqueeze(0).to(self.dev)
-
-        wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000)
-        wav16k = torch.from_numpy(wav16k).to(self.dev)
-        c = utils.get_hubert_content(self.hubert_model, wav_16k_tensor=wav16k)
-        c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1])
-
-        if cluster_infer_ratio != 0:
-            cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T
-            cluster_c = torch.FloatTensor(cluster_c).to(self.dev)
-            c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c
-
-        c = c.unsqueeze(0)
-        return c, f0, uv
-
-    def infer(self, speaker, tran, raw_path,
-              cluster_infer_ratio=0,
-              auto_predict_f0=False,
-              noice_scale=0.4,
-              f0_filter=False,
-              F0_mean_pooling=False,
-              enhancer_adaptive_key=0
-              ):
-
-        speaker_id = self.spk2id.__dict__.get(speaker)
-        if not speaker_id and type(speaker) is int:
-            if len(self.spk2id.__dict__) >= speaker:
-                speaker_id = speaker
-        sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0)
-        c, f0, uv = self.get_unit_f0(raw_path, tran, cluster_infer_ratio, speaker, f0_filter, F0_mean_pooling)
-        if "half" in self.net_g_path and torch.cuda.is_available():
-            c = c.half()
-        with torch.no_grad():
-            start = time.time()
-            audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0, 0].data.float()
-            if self.nsf_hifigan_enhance:
-                audio, _ = self.enhancer.enhance(
-                    audio[None, :],
-                    self.target_sample,
-                    f0[:, :, None],
-                    self.hps_ms.data.hop_length,
-                    adaptive_key=enhancer_adaptive_key)
-            use_time = time.time() - start
-            print("vits use time:{}".format(use_time))
-        return audio, audio.shape[-1]
-
-    def clear_empty(self):
-        # free the cached GPU memory
-        torch.cuda.empty_cache()
-
-    def slice_inference(self,
-                        raw_audio_path,
-                        spk,
-                        tran,
-                        slice_db,
-                        cluster_infer_ratio,
-                        auto_predict_f0,
-                        noice_scale,
-                        pad_seconds=0.5,
-                        clip_seconds=0,
-                        lg_num=0,
-                        lgr_num=0.75,
-                        F0_mean_pooling=False,
-                        enhancer_adaptive_key=0
-                        ):
-        wav_path = raw_audio_path
-        chunks = slicer.cut(wav_path, db_thresh=slice_db)
-        audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
-        per_size = int(clip_seconds * audio_sr)
-        lg_size = int(lg_num * audio_sr)
-        lg_size_r = int(lg_size * lgr_num)
-        lg_size_c_l = (lg_size - lg_size_r) // 2
-        lg_size_c_r = lg_size - lg_size_r - lg_size_c_l
-        lg = np.linspace(0, 1, lg_size_r) if lg_size != 0 else 0
-
-        audio = []
-        for (slice_tag, data) in audio_data:
-            print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
-            # pad
-            length = int(np.ceil(len(data) / audio_sr * self.target_sample))
-            if slice_tag:
-                print('jump empty segment')
-                _audio = np.zeros(length)
-                audio.extend(list(pad_array(_audio, length)))
-                continue
-            if per_size != 0:
-                datas = split_list_by_n(data, per_size, lg_size)
-            else:
-                datas = [data]
-            for k, dat in enumerate(datas):
-                per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) if clip_seconds != 0 else length
-                if clip_seconds != 0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======')
-                # pad
-                pad_len = int(audio_sr * pad_seconds)
-                dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])])
-                raw_path = io.BytesIO()
-                soundfile.write(raw_path, dat, audio_sr, format="wav")
-                raw_path.seek(0)
-                out_audio, out_sr = self.infer(spk, tran, raw_path,
-                                               cluster_infer_ratio=cluster_infer_ratio,
-                                               auto_predict_f0=auto_predict_f0,
-                                               noice_scale=noice_scale,
-                                               F0_mean_pooling=F0_mean_pooling,
-                                               enhancer_adaptive_key=enhancer_adaptive_key
-                                               )
-                _audio = out_audio.cpu().numpy()
-                pad_len = int(self.target_sample * pad_seconds)
-                _audio = _audio[pad_len:-pad_len]
-                _audio = pad_array(_audio, per_length)
-                if lg_size != 0 and k != 0:
-                    lg1 = audio[-(lg_size_r + lg_size_c_r):-lg_size_c_r] if lgr_num != 1 else audio[-lg_size:]
-                    lg2 = _audio[lg_size_c_l:lg_size_c_l + lg_size_r] if lgr_num != 1 else _audio[0:lg_size]
-                    lg_pre = lg1 * (1 - lg) + lg2 * lg
-                    audio = audio[0:-(lg_size_r + lg_size_c_r)] if lgr_num != 1 else audio[0:-lg_size]
-                    audio.extend(lg_pre)
-                    _audio = _audio[lg_size_c_l + lg_size_r:] if lgr_num != 1 else _audio[lg_size:]
-                audio.extend(list(_audio))
-        return np.array(audio)
-
-class RealTimeVC:
-    def __init__(self):
-        self.last_chunk = None
-        self.last_o = None
-        self.chunk_len = 16000  # chunk length
-        self.pre_len = 3840  # crossfade length, a multiple of 640
-
-    """Input and output are both 1-D numpy audio waveform arrays"""
-
-    def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path,
-                cluster_infer_ratio=0,
-                auto_predict_f0=False,
-                noice_scale=0.4,
-                f0_filter=False):
-
-        import maad
-        audio, sr = torchaudio.load(input_wav_path)
-        audio = audio.cpu().numpy()[0]
-        temp_wav = io.BytesIO()
-        if self.last_chunk is None:
-            input_wav_path.seek(0)
-
-            audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path,
-                                        cluster_infer_ratio=cluster_infer_ratio,
-                                        auto_predict_f0=auto_predict_f0,
-                                        noice_scale=noice_scale,
-                                        f0_filter=f0_filter)
-
-            audio = audio.cpu().numpy()
-            self.last_chunk = audio[-self.pre_len:]
-            self.last_o = audio
-            return audio[-self.chunk_len:]
-        else:
-            audio = np.concatenate([self.last_chunk, audio])
-            soundfile.write(temp_wav, audio, sr, format="wav")
-            temp_wav.seek(0)
-
-            audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav,
-                                        cluster_infer_ratio=cluster_infer_ratio,
-                                        auto_predict_f0=auto_predict_f0,
-                                        noice_scale=noice_scale,
-                                        f0_filter=f0_filter)
-
-            audio = audio.cpu().numpy()
-            ret = maad.util.crossfade(self.last_o, audio, self.pre_len)
-            self.last_chunk = audio[-self.pre_len:]
-            self.last_o = audio
-            return ret[self.chunk_len:2 * self.chunk_len]
diff --git a/spaces/MathysL/AutoGPT4/autogpt/commands/file_operations.py b/spaces/MathysL/AutoGPT4/autogpt/commands/file_operations.py
deleted file mode 100644
index ad145ec956dd9dafd39e09c2244d001cf5febd2f..0000000000000000000000000000000000000000
--- a/spaces/MathysL/AutoGPT4/autogpt/commands/file_operations.py
+++ /dev/null
@@ -1,267 +0,0 @@
-"""File operations for AutoGPT"""
-from __future__ import annotations
-
-import os
-import os.path
-from typing import Generator
-
-import requests
-from colorama import Back, Fore
-from requests.adapters import HTTPAdapter, Retry
-
-from autogpt.spinner import Spinner
-from autogpt.utils import readable_file_size
-from autogpt.workspace import WORKSPACE_PATH, path_in_workspace
-
-LOG_FILE = "file_logger.txt"
-LOG_FILE_PATH = WORKSPACE_PATH / LOG_FILE
-
-
-def check_duplicate_operation(operation: str, filename: str) -> bool:
-    """Check if the operation has already been performed on the given file
-
-    Args:
-        operation (str): The operation to check for
-        filename (str): The name of the file to check for
-
-    Returns:
-        bool: True if the operation has already been performed on the file
-    """
-    log_content = read_file(LOG_FILE)
-    log_entry = f"{operation}: {filename}\n"
-    return log_entry in log_content
-
-
-def log_operation(operation: str, filename: str) -> None:
"""Log the file operation to the file_logger.txt - - Args: - operation (str): The operation to log - filename (str): The name of the file the operation was performed on - """ - log_entry = f"{operation}: {filename}\n" - - # Create the log file if it doesn't exist - if not os.path.exists(LOG_FILE_PATH): - with open(LOG_FILE_PATH, "w", encoding="utf-8") as f: - f.write("File Operation Logger ") - - append_to_file(LOG_FILE, log_entry, shouldLog=False) - - -def split_file( - content: str, max_length: int = 4000, overlap: int = 0 -) -> Generator[str, None, None]: - """ - Split text into chunks of a specified maximum length with a specified overlap - between chunks. - - :param content: The input text to be split into chunks - :param max_length: The maximum length of each chunk, - default is 4000 (about 1k token) - :param overlap: The number of overlapping characters between chunks, - default is no overlap - :return: A generator yielding chunks of text - """ - start = 0 - content_length = len(content) - - while start < content_length: - end = start + max_length - if end + overlap < content_length: - chunk = content[start : end + overlap - 1] - else: - chunk = content[start:content_length] - - # Account for the case where the last chunk is shorter than the overlap, so it has already been consumed - if len(chunk) <= overlap: - break - - yield chunk - start += max_length - overlap - - -def read_file(filename: str) -> str: - """Read a file and return the contents - - Args: - filename (str): The name of the file to read - - Returns: - str: The contents of the file - """ - try: - filepath = path_in_workspace(filename) - with open(filepath, "r", encoding="utf-8") as f: - content = f.read() - return content - except Exception as e: - return f"Error: {str(e)}" - - -def ingest_file( - filename: str, memory, max_length: int = 4000, overlap: int = 200 -) -> None: - """ - Ingest a file by reading its content, splitting it into chunks with a specified - maximum length and overlap, and adding the chunks to the memory storage. - - :param filename: The name of the file to ingest - :param memory: An object with an add() method to store the chunks in memory - :param max_length: The maximum length of each chunk, default is 4000 - :param overlap: The number of overlapping characters between chunks, default is 200 - """ - try: - print(f"Working with file {filename}") - content = read_file(filename) - content_length = len(content) - print(f"File length: {content_length} characters") - - chunks = list(split_file(content, max_length=max_length, overlap=overlap)) - - num_chunks = len(chunks) - for i, chunk in enumerate(chunks): - print(f"Ingesting chunk {i + 1} / {num_chunks} into memory") - memory_to_add = ( - f"Filename: {filename}\n" f"Content part#{i + 1}/{num_chunks}: {chunk}" - ) - - memory.add(memory_to_add) - - print(f"Done ingesting {num_chunks} chunks from {filename}.") - except Exception as e: - print(f"Error while ingesting file '{filename}': {str(e)}") - - -def write_to_file(filename: str, text: str) -> str: - """Write text to a file - - Args: - filename (str): The name of the file to write to - text (str): The text to write to the file - - Returns: - str: A message indicating success or failure - """ - if check_duplicate_operation("write", filename): - return "Error: File has already been updated." 
- try: - filepath = path_in_workspace(filename) - directory = os.path.dirname(filepath) - if not os.path.exists(directory): - os.makedirs(directory) - with open(filepath, "w", encoding="utf-8") as f: - f.write(text) - log_operation("write", filename) - return "File written to successfully." - except Exception as e: - return f"Error: {str(e)}" - - -def append_to_file(filename: str, text: str, shouldLog: bool = True) -> str: - """Append text to a file - - Args: - filename (str): The name of the file to append to - text (str): The text to append to the file - - Returns: - str: A message indicating success or failure - """ - try: - filepath = path_in_workspace(filename) - with open(filepath, "a") as f: - f.write(text) - - if shouldLog: - log_operation("append", filename) - - return "Text appended successfully." - except Exception as e: - return f"Error: {str(e)}" - - -def delete_file(filename: str) -> str: - """Delete a file - - Args: - filename (str): The name of the file to delete - - Returns: - str: A message indicating success or failure - """ - if check_duplicate_operation("delete", filename): - return "Error: File has already been deleted." - try: - filepath = path_in_workspace(filename) - os.remove(filepath) - log_operation("delete", filename) - return "File deleted successfully." - except Exception as e: - return f"Error: {str(e)}" - - -def search_files(directory: str) -> list[str]: - """Search for files in a directory - - Args: - directory (str): The directory to search in - - Returns: - list[str]: A list of files found in the directory - """ - found_files = [] - - if directory in {"", "/"}: - search_directory = WORKSPACE_PATH - else: - search_directory = path_in_workspace(directory) - - for root, _, files in os.walk(search_directory): - for file in files: - if file.startswith("."): - continue - relative_path = os.path.relpath(os.path.join(root, file), WORKSPACE_PATH) - found_files.append(relative_path) - - return found_files - - -def download_file(url, filename): - """Downloads a file - Args: - url (str): URL of the file to download - filename (str): Filename to save the file as - """ - safe_filename = path_in_workspace(filename) - try: - message = f"{Fore.YELLOW}Downloading file from {Back.LIGHTBLUE_EX}{url}{Back.RESET}{Fore.RESET}" - with Spinner(message) as spinner: - session = requests.Session() - retry = Retry(total=3, backoff_factor=1, status_forcelist=[502, 503, 504]) - adapter = HTTPAdapter(max_retries=retry) - session.mount("http://", adapter) - session.mount("https://", adapter) - - total_size = 0 - downloaded_size = 0 - - with session.get(url, allow_redirects=True, stream=True) as r: - r.raise_for_status() - total_size = int(r.headers.get("Content-Length", 0)) - downloaded_size = 0 - - with open(safe_filename, "wb") as f: - for chunk in r.iter_content(chunk_size=8192): - f.write(chunk) - downloaded_size += len(chunk) - - # Update the progress message - progress = f"{readable_file_size(downloaded_size)} / {readable_file_size(total_size)}" - spinner.update_message(f"{message} {progress}") - - return f'Successfully downloaded and locally stored file: "{filename}"! 
(Size: {readable_file_size(total_size)})' - except requests.HTTPError as e: - return f"Got an HTTP Error whilst trying to download file: {e}" - except Exception as e: - return "Error: " + str(e) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/utils/logger.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/utils/logger.py deleted file mode 100644 index 4149d9eda3dfef07490352d22ac40c42460315e4..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/utils/logger.py +++ /dev/null @@ -1,27 +0,0 @@ -import logging - -from annotator.uniformer.mmcv.utils import get_logger - - -def get_root_logger(log_file=None, log_level=logging.INFO): - """Get the root logger. - - The logger will be initialized if it has not been initialized. By default a - StreamHandler will be added. If `log_file` is specified, a FileHandler will - also be added. The name of the root logger is the top-level package name, - e.g., "mmseg". - - Args: - log_file (str | None): The log filename. If specified, a FileHandler - will be added to the root logger. - log_level (int): The root logger level. Note that only the process of - rank 0 is affected, while other processes will set the level to - "Error" and be silent most of the time. - - Returns: - logging.Logger: The root logger. - """ - - logger = get_logger(name='mmseg', log_file=log_file, log_level=log_level) - - return logger diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/imgur_converter.py b/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/imgur_converter.py deleted file mode 100644 index f6c19cd33cdf27bc085563992a126aa02028c43e..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/imgur_converter.py +++ /dev/null @@ -1,152 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
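-# Overview: reads the IMGUR5K JSON annotations, converts each oriented box
-# [x_ctr, y_ctr, w, h, angle] into a quadrilateral plus an axis-aligned
-# bounding box, and dumps per-split 'instances_{split}.json' files for MMOCR.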
-import argparse -import math -import os.path as osp - -import mmcv -import mmengine -import numpy as np - -from mmocr.utils import dump_ocr_data - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Generate training, validation and test set of IMGUR ') - parser.add_argument('root_path', help='Root dir path of IMGUR') - args = parser.parse_args() - - return args - - -def collect_imgur_info(root_path, annotation_filename, print_every=1000): - - annotation_path = osp.join(root_path, 'annotations', annotation_filename) - if not osp.exists(annotation_path): - raise Exception( - f'{annotation_path} not exists, please check and try again.') - - annotation = mmengine.load(annotation_path) - images = annotation['index_to_ann_map'].keys() - img_infos = [] - for i, img_name in enumerate(images): - if i >= 0 and i % print_every == 0: - print(f'{i}/{len(images)}') - - img_path = osp.join(root_path, 'imgs', img_name + '.jpg') - - # Skip not exist images - if not osp.exists(img_path): - continue - - img = mmcv.imread(img_path, 'unchanged') - - # Skip broken images - if img is None: - continue - - img_info = dict( - file_name=img_name + '.jpg', - height=img.shape[0], - width=img.shape[1]) - - anno_info = [] - for ann_id in annotation['index_to_ann_map'][img_name]: - ann = annotation['ann_id'][ann_id] - - # The original annotation is oriented rects [x, y, w, h, a] - box = np.fromstring( - ann['bounding_box'][1:-2], sep=',', dtype=float) - quadrilateral = convert_oriented_box(box) - - xs, ys = quadrilateral[::2], quadrilateral[1::2] - x = max(0, math.floor(min(xs))) - y = max(0, math.floor(min(ys))) - w = math.floor(max(xs)) - x - h = math.floor(max(ys)) - y - bbox = [x, y, w, h] - segmentation = quadrilateral - - anno = dict( - iscrowd=0, - category_id=1, - bbox=bbox, - area=w * h, - segmentation=[segmentation]) - anno_info.append(anno) - img_info.update(anno_info=anno_info) - img_infos.append(img_info) - - return img_infos - - -def convert_oriented_box(box): - - x_ctr, y_ctr, width, height, angle = box[:5] - angle = -angle * math.pi / 180 - - tl_x, tl_y, br_x, br_y = -width / 2, -height / 2, width / 2, height / 2 - rect = np.array([[tl_x, br_x, br_x, tl_x], [tl_y, tl_y, br_y, br_y]]) - R = np.array([[np.cos(angle), -np.sin(angle)], - [np.sin(angle), np.cos(angle)]]) - poly = R.dot(rect) - x0, x1, x2, x3 = poly[0, :4] + x_ctr - y0, y1, y2, y3 = poly[1, :4] + y_ctr - poly = np.array([x0, y0, x1, y1, x2, y2, x3, y3], dtype=np.float32) - poly = get_best_begin_point_single(poly) - - return poly.tolist() - - -def get_best_begin_point_single(coordinate): - - x1, y1, x2, y2, x3, y3, x4, y4 = coordinate - xmin = min(x1, x2, x3, x4) - ymin = min(y1, y2, y3, y4) - xmax = max(x1, x2, x3, x4) - ymax = max(y1, y2, y3, y4) - combine = [[[x1, y1], [x2, y2], [x3, y3], [x4, y4]], - [[x2, y2], [x3, y3], [x4, y4], [x1, y1]], - [[x3, y3], [x4, y4], [x1, y1], [x2, y2]], - [[x4, y4], [x1, y1], [x2, y2], [x3, y3]]] - dst_coordinate = [[xmin, ymin], [xmax, ymin], [xmax, ymax], [xmin, ymax]] - force = 100000000.0 - force_flag = 0 - for i in range(4): - temp_force = cal_line_length(combine[i][0], dst_coordinate[0]) \ - + cal_line_length(combine[i][1], dst_coordinate[1]) \ - + cal_line_length(combine[i][2], dst_coordinate[2]) \ - + cal_line_length(combine[i][3], dst_coordinate[3]) - if temp_force < force: - force = temp_force - force_flag = i - if force_flag != 0: - pass - - return np.array(combine[force_flag]).reshape(8) - - -def cal_line_length(point1, point2): - - return math.sqrt( - math.pow(point1[0] - 
point2[0], 2) + - math.pow(point1[1] - point2[1], 2)) - - -def main(): - args = parse_args() - root_path = args.root_path - - for split in ['train', 'val', 'test']: - print(f'Processing {split} set...') - with mmengine.Timer( - print_tmpl='It takes {}s to convert IMGUR annotation'): - anno_infos = collect_imgur_info( - root_path, f'imgur5k_annotations_{split}.json') - dump_ocr_data(anno_infos, - osp.join(root_path, f'instances_{split}.json'), - 'textdet') - - -if __name__ == '__main__': - main() diff --git a/spaces/Msp/Document_Classification_DIT/app.py b/spaces/Msp/Document_Classification_DIT/app.py deleted file mode 100644 index 1deccb197076aedb2a639494aafa305053d5349b..0000000000000000000000000000000000000000 --- a/spaces/Msp/Document_Classification_DIT/app.py +++ /dev/null @@ -1,16 +0,0 @@ -import gradio as gr -from transformers import pipeline - -title = "Document Image Transformer" -description = "Gradio Demo for DiT, the Document Image Transformer pre-trained on IIT-CDIP, a dataset that includes 42 million document images and fine-tuned on RVL-CDIP, a dataset consisting of 400,000 grayscale images in 16 classes, with 25,000 images per class. To use it, simply add your image, or click one of the examples to load them. Read more at the links below." -article = "
Huggingface Model
" - -pipe = pipeline(task="image-classification", - model="microsoft/dit-base-finetuned-rvlcdip") -gr.Interface.from_pipeline(pipe, - title=title, - description=description, - examples=[ 'handwritten.jpeg', 'resume.jpeg','form.jpeg'], - article=article, - enable_queue=True, - ).launch() \ No newline at end of file diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/app.py b/spaces/NAACL2022/CLIP-Caption-Reward/app.py deleted file mode 100644 index 90115df2c48b98dc81501498adc463b1ed3e2229..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/CLIP-Caption-Reward/app.py +++ /dev/null @@ -1,194 +0,0 @@ -import torch -import torch.nn as nn - -import numpy as np - -import json - -import captioning.utils.opts as opts -import captioning.models as models -import captioning.utils.misc as utils - -import pytorch_lightning as pl - -import gradio as gr - - -# Checkpoint class -class ModelCheckpoint(pl.callbacks.ModelCheckpoint): - - def on_keyboard_interrupt(self, trainer, pl_module): - # Save model when keyboard interrupt - filepath = os.path.join(self.dirpath, self.prefix + 'interrupt.ckpt') - self._save_model(filepath) - -device = 'cpu' #@param ["cuda", "cpu"] {allow-input: true} - -reward = 'clips_grammar' #@param ["mle", "cider", "clips", "cider_clips", "clips_grammar"] {allow-input: true} - -if reward == 'mle': - cfg = f'./configs/phase1/clipRN50_{reward}.yml' -else: - cfg = f'./configs/phase2/clipRN50_{reward}.yml' - -print("Loading cfg from", cfg) - -opt = opts.parse_opt(parse=False, cfg=cfg) - -import gdown - -if reward == "mle": - url = "https://drive.google.com/drive/folders/1hfHWDn5iXsdjB63E5zdZBAoRLWHQC3LD" -elif reward == "cider": - url = "https://drive.google.com/drive/folders/1MnSmCd8HFnBvQq_4K-q4vsVkzEw0OIOs" -elif reward == "clips": - url = "https://drive.google.com/drive/folders/1toceycN-qilHsbYjKalBLtHJck1acQVe" -elif reward == "cider_clips": - url = "https://drive.google.com/drive/folders/1toceycN-qilHsbYjKalBLtHJck1acQVe" -elif reward == "clips_grammar": - url = "https://drive.google.com/drive/folders/1nSX9aS7pPK4-OTHYtsUD_uEkwIQVIV7W" -gdown.download_folder(url, quiet=True, use_cookies=False, output="save/") - -url = "https://drive.google.com/uc?id=1HNRE1MYO9wxmtMHLC8zURraoNFu157Dp" -gdown.download(url, quiet=True, use_cookies=False, output="data/") - -dict_json = json.load(open('./data/cocotalk.json')) -print(dict_json.keys()) - -ix_to_word = dict_json['ix_to_word'] -vocab_size = len(ix_to_word) -print('vocab size:', vocab_size) - -seq_length = 1 - -opt.vocab_size = vocab_size -opt.seq_length = seq_length - -opt.batch_size = 1 -opt.vocab = ix_to_word -# opt.use_grammar = False - -model = models.setup(opt) -del opt.vocab - -ckpt_path = opt.checkpoint_path + '-last.ckpt' - -print("Loading checkpoint from", ckpt_path) -raw_state_dict = torch.load( - ckpt_path, - map_location=device) - -strict = True - -state_dict = raw_state_dict['state_dict'] - -if '_vocab' in state_dict: - model.vocab = utils.deserialize(state_dict['_vocab']) - del state_dict['_vocab'] -elif strict: - raise KeyError -if '_opt' in state_dict: - saved_model_opt = utils.deserialize(state_dict['_opt']) - del state_dict['_opt'] - # Make sure the saved opt is compatible with the curren topt - need_be_same = ["caption_model", - "rnn_type", "rnn_size", "num_layers"] - for checkme in need_be_same: - if getattr(saved_model_opt, checkme) in ['updown', 'topdown'] and \ - getattr(opt, checkme) in ['updown', 'topdown']: - continue - assert getattr(saved_model_opt, checkme) == getattr( - opt, checkme), "Command line 
argument and saved model disagree on '%s' " % checkme -elif strict: - raise KeyError -res = model.load_state_dict(state_dict, strict) -print(res) - -model = model.to(device) -model.eval(); - -import clip -from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize -from PIL import Image -from timm.models.vision_transformer import resize_pos_embed - -clip_model, clip_transform = clip.load("RN50", jit=False, device=device) - -preprocess = Compose([ - Resize((448, 448), interpolation=Image.BICUBIC), - CenterCrop((448, 448)), - ToTensor() -]) - -image_mean = torch.Tensor([0.48145466, 0.4578275, 0.40821073]).to(device).reshape(3, 1, 1) -image_std = torch.Tensor([0.26862954, 0.26130258, 0.27577711]).to(device).reshape(3, 1, 1) - -num_patches = 196 #600 * 1000 // 32 // 32 -pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, clip_model.visual.attnpool.positional_embedding.shape[-1], device=device),) -pos_embed.weight = resize_pos_embed(clip_model.visual.attnpool.positional_embedding.unsqueeze(0), pos_embed) -clip_model.visual.attnpool.positional_embedding = pos_embed - -def inference(img): - - with torch.no_grad(): - image = preprocess(img) - image = torch.tensor(np.stack([image])).to(device) - image -= image_mean - image /= image_std - - tmp_att, tmp_fc = clip_model.encode_image(image) - tmp_att = tmp_att[0].permute(1, 2, 0) - tmp_fc = tmp_fc[0] - - att_feat = tmp_att - fc_feat = tmp_fc - - - # Inference configurations - eval_kwargs = {} - eval_kwargs.update(vars(opt)) - - verbose = eval_kwargs.get('verbose', True) - verbose_beam = eval_kwargs.get('verbose_beam', 0) - verbose_loss = eval_kwargs.get('verbose_loss', 1) - - # dataset = eval_kwargs.get('dataset', 'coco') - beam_size = eval_kwargs.get('beam_size', 1) - sample_n = eval_kwargs.get('sample_n', 1) - remove_bad_endings = eval_kwargs.get('remove_bad_endings', 0) - - with torch.no_grad(): - fc_feats = torch.zeros((1,0)).to(device) - att_feats = att_feat.view(1, 196, 2048).float().to(device) - att_masks = None - - # forward the model to also get generated samples for each image - # Only leave one feature for each image, in case duplicate sample - tmp_eval_kwargs = eval_kwargs.copy() - tmp_eval_kwargs.update({'sample_n': 1}) - seq, seq_logprobs = model( - fc_feats, att_feats, att_masks, opt=tmp_eval_kwargs, mode='sample') - seq = seq.data - - sents = utils.decode_sequence(model.vocab, seq) - - return sents[0] - -demo = gr.Blocks() - -with demo: - gr.Markdown( - """ - # Gradio Demo for [j-min/CLIP-Caption-Reward](https://github.com/j-min/CLIP-Caption-Reward) - """) - inp = gr.Image(type="pil") - out = gr.Textbox() - - image_button = gr.Button("Run") - image_button.click(fn=inference, - inputs=inp, - outputs=out, - api_name="clip_caption") - - -demo.launch() \ No newline at end of file diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/models/bert_pretrainer.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/models/bert_pretrainer.py deleted file mode 100644 index bce33747f03af723927fba138ddec55160262449..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/models/bert_pretrainer.py +++ /dev/null @@ -1,231 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Trainer network for BERT-style models.""" -# pylint: disable=g-classes-have-attributes -from __future__ import absolute_import -from __future__ import division -# from __future__ import google_type_annotations -from __future__ import print_function - -import copy -from typing import List, Optional - -import gin -import tensorflow as tf - -from official.nlp.modeling import layers -from official.nlp.modeling import networks - - -@tf.keras.utils.register_keras_serializable(package='Text') -class BertPretrainer(tf.keras.Model): - """BERT network training model. - - This is an implementation of the network structure surrounding a transformer - encoder as described in "BERT: Pre-training of Deep Bidirectional Transformers - for Language Understanding" (https://arxiv.org/abs/1810.04805). - - The BertPretrainer allows a user to pass in a transformer stack, and - instantiates the masked language model and classification networks that are - used to create the training objectives. - - Arguments: - network: A transformer network. This network should output a sequence output - and a classification output. - num_classes: Number of classes to predict from the classification network. - num_token_predictions: Number of tokens to predict from the masked LM. - embedding_table: Embedding table of a network. If None, the - "network.get_embedding_table()" is used. - activation: The activation (if any) to use in the masked LM network. If - None, no activation will be used. - initializer: The initializer (if any) to use in the masked LM and - classification networks. Defaults to a Glorot uniform initializer. - output: The output style for this network. Can be either 'logits' or - 'predictions'. - """ - - def __init__(self, - network, - num_classes, - num_token_predictions, - embedding_table=None, - activation=None, - initializer='glorot_uniform', - output='logits', - **kwargs): - self._self_setattr_tracking = False - self._config = { - 'network': network, - 'num_classes': num_classes, - 'num_token_predictions': num_token_predictions, - 'activation': activation, - 'initializer': initializer, - 'output': output, - } - self.encoder = network - # We want to use the inputs of the passed network as the inputs to this - # Model. To do this, we need to keep a copy of the network inputs for use - # when we construct the Model object at the end of init. (We keep a copy - # because we'll be adding another tensor to the copy later.) - network_inputs = self.encoder.inputs - inputs = copy.copy(network_inputs) - - # Because we have a copy of inputs to create this Model object, we can - # invoke the Network object with its own input tensors to start the Model. - # Note that, because of how deferred construction happens, we can't use - # the copy of the list here - by the time the network is invoked, the list - # object contains the additional input added below. - sequence_output, cls_output = self.encoder(network_inputs) - - # The encoder network may get outputs from all layers. 
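-    # (when the outputs are per-layer lists, keep only the final layer below)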
- if isinstance(sequence_output, list): - sequence_output = sequence_output[-1] - if isinstance(cls_output, list): - cls_output = cls_output[-1] - sequence_output_length = sequence_output.shape.as_list()[1] - if sequence_output_length < num_token_predictions: - raise ValueError( - "The passed network's output length is %s, which is less than the " - 'requested num_token_predictions %s.' % - (sequence_output_length, num_token_predictions)) - - masked_lm_positions = tf.keras.layers.Input( - shape=(num_token_predictions,), - name='masked_lm_positions', - dtype=tf.int32) - inputs.append(masked_lm_positions) - - if embedding_table is None: - embedding_table = self.encoder.get_embedding_table() - self.masked_lm = layers.MaskedLM( - embedding_table=embedding_table, - activation=activation, - initializer=initializer, - output=output, - name='cls/predictions') - lm_outputs = self.masked_lm( - sequence_output, masked_positions=masked_lm_positions) - - self.classification = networks.Classification( - input_width=cls_output.shape[-1], - num_classes=num_classes, - initializer=initializer, - output=output, - name='classification') - sentence_outputs = self.classification(cls_output) - - super(BertPretrainer, self).__init__( - inputs=inputs, - outputs=dict(masked_lm=lm_outputs, classification=sentence_outputs), - **kwargs) - - def get_config(self): - return self._config - - @classmethod - def from_config(cls, config, custom_objects=None): - return cls(**config) - - -# TODO(hongkuny): Migrate to BertPretrainerV2 for all usages. -@tf.keras.utils.register_keras_serializable(package='Text') -@gin.configurable -class BertPretrainerV2(tf.keras.Model): - """BERT pretraining model V2. - - (Experimental). - Adds the masked language model head and optional classification heads upon the - transformer encoder. When num_masked_tokens == 0, there won't be MaskedLM - head. - - Arguments: - num_masked_tokens: Number of tokens to predict from the masked LM. - encoder_network: A transformer network. This network should output a - sequence output and a classification output. - mlm_activation: The activation (if any) to use in the masked LM network. If - None, no activation will be used. - mlm_initializer: The initializer (if any) to use in the masked LM. Default - to a Glorot uniform initializer. - classification_heads: A list of optional head layers to transform on encoder - sequence outputs. - name: The name of the model. - Inputs: Inputs defined by the encoder network, plus `masked_lm_positions` as a - dictionary. - Outputs: A dictionary of `lm_output` and classification head outputs keyed by - head names. 
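-
-  Example:
-    A hypothetical sketch (the encoder is built elsewhere, e.g. as a
-    ``networks.TransformerEncoder``):
-
-    >>> pretrainer = BertPretrainerV2(
-    ...     num_masked_tokens=20, encoder_network=encoder)
-    >>> outputs = pretrainer(inputs)
-    >>> # 'outputs' maps 'lm_output' and each head's name to its tensor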
- """ - - def __init__( - self, - num_masked_tokens: int, - encoder_network: tf.keras.Model, - mlm_activation=None, - mlm_initializer='glorot_uniform', - classification_heads: Optional[List[tf.keras.layers.Layer]] = None, - name: str = 'bert', - **kwargs): - self._self_setattr_tracking = False - self._config = { - 'encoder_network': encoder_network, - 'num_masked_tokens': num_masked_tokens, - 'mlm_initializer': mlm_initializer, - 'classification_heads': classification_heads, - 'name': name, - } - - self.encoder_network = encoder_network - inputs = copy.copy(self.encoder_network.inputs) - sequence_output, _ = self.encoder_network(inputs) - - self.classification_heads = classification_heads or [] - if len(set([cls.name for cls in self.classification_heads])) != len( - self.classification_heads): - raise ValueError('Classification heads should have unique names.') - - outputs = dict() - if num_masked_tokens > 0: - self.masked_lm = layers.MaskedLM( - embedding_table=self.encoder_network.get_embedding_table(), - activation=mlm_activation, - initializer=mlm_initializer, - name='cls/predictions') - masked_lm_positions = tf.keras.layers.Input( - shape=(num_masked_tokens,), - name='masked_lm_positions', - dtype=tf.int32) - inputs.append(masked_lm_positions) - outputs['lm_output'] = self.masked_lm( - sequence_output, masked_positions=masked_lm_positions) - for cls_head in self.classification_heads: - outputs[cls_head.name] = cls_head(sequence_output) - - super(BertPretrainerV2, self).__init__( - inputs=inputs, outputs=outputs, name=name, **kwargs) - - @property - def checkpoint_items(self): - """Returns a dictionary of items to be additionally checkpointed.""" - items = dict(encoder=self.encoder_network) - for head in self.classification_heads: - for key, item in head.checkpoint_items.items(): - items['.'.join([head.name, key])] = item - return items - - def get_config(self): - return self._config - - @classmethod - def from_config(cls, config, custom_objects=None): - return cls(**config) diff --git a/spaces/Nandhusnm/testing/README.md b/spaces/Nandhusnm/testing/README.md deleted file mode 100644 index b4c1cdb3062397f56baa880782c8f2cdb67ca7fe..0000000000000000000000000000000000000000 --- a/spaces/Nandhusnm/testing/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Testing -emoji: 🐨 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/modules/fixed_pre_decision.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/modules/fixed_pre_decision.py deleted file mode 100644 index 3991414aed3800f301e4097e819d3064bb549c37..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/modules/fixed_pre_decision.py +++ /dev/null @@ -1,190 +0,0 @@ -from functools import partial - -import torch -from torch import Tensor -import math -import torch.nn.functional as F - -from . 
import register_monotonic_attention -from .monotonic_multihead_attention import ( - MonotonicAttention, - MonotonicInfiniteLookbackAttention, - WaitKAttention -) -from typing import Dict, Optional - - -def fixed_pooling_monotonic_attention(monotonic_attention): - def create_model(monotonic_attention, klass): - class FixedStrideMonotonicAttention(monotonic_attention): - def __init__(self, args): - self.waitk_lagging = 0 - self.num_heads = 0 - self.noise_mean = 0.0 - self.noise_var = 0.0 - super().__init__(args) - self.pre_decision_type = args.fixed_pre_decision_type - self.pre_decision_ratio = args.fixed_pre_decision_ratio - self.pre_decision_pad_threshold = args.fixed_pre_decision_pad_threshold - assert self.pre_decision_ratio > 1 - - if args.fixed_pre_decision_type == "average": - self.pooling_layer = torch.nn.AvgPool1d( - kernel_size=self.pre_decision_ratio, - stride=self.pre_decision_ratio, - ceil_mode=True, - ) - elif args.fixed_pre_decision_type == "last": - - def last(key): - if key.size(2) < self.pre_decision_ratio: - return key - else: - k = key[ - :, - :, - self.pre_decision_ratio - 1:: self.pre_decision_ratio, - ].contiguous() - if key.size(-1) % self.pre_decision_ratio != 0: - k = torch.cat([k, key[:, :, -1:]], dim=-1).contiguous() - return k - - self.pooling_layer = last - else: - raise NotImplementedError - - @staticmethod - def add_args(parser): - super( - FixedStrideMonotonicAttention, FixedStrideMonotonicAttention - ).add_args(parser) - parser.add_argument( - "--fixed-pre-decision-ratio", - type=int, - required=True, - help=( - "Ratio for the fixed pre-decision," - "indicating how many encoder steps will start" - "simultaneous decision making process." - ), - ) - parser.add_argument( - "--fixed-pre-decision-type", - default="average", - choices=["average", "last"], - help="Pooling type", - ) - parser.add_argument( - "--fixed-pre-decision-pad-threshold", - type=float, - default=0.3, - help="If a part of the sequence has pad" - ",the threshold the pooled part is a pad.", - ) - - def insert_zeros(self, x): - bsz_num_heads, tgt_len, src_len = x.size() - stride = self.pre_decision_ratio - weight = F.pad(torch.ones(1, 1, 1).to(x), (stride - 1, 0)) - x_upsample = F.conv_transpose1d( - x.view(-1, src_len).unsqueeze(1), - weight, - stride=stride, - padding=0, - ) - return x_upsample.squeeze(1).view(bsz_num_heads, tgt_len, -1) - - def p_choose( - self, - query: Optional[Tensor], - key: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - ): - assert key is not None - assert query is not None - src_len = key.size(0) - tgt_len = query.size(0) - batch_size = query.size(1) - - key_pool = self.pooling_layer(key.transpose(0, 2)).transpose(0, 2) - - if key_padding_mask is not None: - key_padding_mask_pool = ( - self.pooling_layer(key_padding_mask.unsqueeze(0).float()) - .squeeze(0) - .gt(self.pre_decision_pad_threshold) - ) - # Make sure at least one element is not pad - key_padding_mask_pool[:, 0] = 0 - else: - key_padding_mask_pool = None - - if incremental_state is not None: - # The floor instead of ceil is used for inference - # But make sure the length key_pool at least 1 - if ( - max(1, math.floor(key.size(0) / self.pre_decision_ratio)) - ) < key_pool.size(0): - key_pool = key_pool[:-1] - if key_padding_mask_pool is not None: - key_padding_mask_pool = key_padding_mask_pool[:-1] - - p_choose_pooled = self.p_choose_from_qk( - query, - key_pool, - key_padding_mask_pool, - 
incremental_state=incremental_state, - ) - - # Upsample, interpolate zeros - p_choose = self.insert_zeros(p_choose_pooled) - - if p_choose.size(-1) < src_len: - # Append zeros if the upsampled p_choose is shorter than src_len - p_choose = torch.cat( - [ - p_choose, - torch.zeros( - p_choose.size(0), - tgt_len, - src_len - p_choose.size(-1) - ).to(p_choose) - ], - dim=2 - ) - else: - # can be larger than src_len because we used ceil before - p_choose = p_choose[:, :, :src_len] - p_choose[:, :, -1] = p_choose_pooled[:, :, -1] - - assert list(p_choose.size()) == [ - batch_size * self.num_heads, - tgt_len, - src_len, - ] - - return p_choose - - FixedStrideMonotonicAttention.__name__ = klass.__name__ - return FixedStrideMonotonicAttention - - return partial(create_model, monotonic_attention) - - -@register_monotonic_attention("waitk_fixed_pre_decision") -@fixed_pooling_monotonic_attention(WaitKAttention) -class WaitKAttentionFixedStride: - pass - - -@register_monotonic_attention("hard_aligned_fixed_pre_decision") -@fixed_pooling_monotonic_attention(MonotonicAttention) -class MonotonicAttentionFixedStride: - pass - - -@register_monotonic_attention("infinite_lookback_fixed_pre_decision") -@fixed_pooling_monotonic_attention(MonotonicInfiniteLookbackAttention) -class MonotonicInfiniteLookbackAttentionFixedStride: - pass diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/model.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/model.py deleted file mode 100644 index ccf132b150a7cc1c125c1190b5fd8f43edaae685..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/model.py +++ /dev/null @@ -1,669 +0,0 @@ -from math import sqrt -import torch -import torch.distributions as distr -from torch.autograd import Variable -from torch import nn -from torch.nn import functional as F -from .layers import ConvNorm, LinearNorm, GlobalAvgPool -from .utils import to_gpu, get_mask_from_lengths - - -class LocationLayer(nn.Module): - def __init__(self, attention_n_filters, attention_kernel_size, - attention_dim): - super(LocationLayer, self).__init__() - padding = int((attention_kernel_size - 1) / 2) - self.location_conv = ConvNorm(2, attention_n_filters, - kernel_size=attention_kernel_size, - padding=padding, bias=False, stride=1, - dilation=1) - self.location_dense = LinearNorm(attention_n_filters, attention_dim, - bias=False, w_init_gain='tanh') - - def forward(self, attention_weights_cat): - processed_attention = self.location_conv(attention_weights_cat) - processed_attention = processed_attention.transpose(1, 2) - processed_attention = self.location_dense(processed_attention) - return processed_attention - - -class Attention(nn.Module): - def __init__(self, attention_rnn_dim, embedding_dim, attention_dim, - attention_location_n_filters, attention_location_kernel_size): - super(Attention, self).__init__() - self.query_layer = LinearNorm(attention_rnn_dim, attention_dim, - bias=False, w_init_gain='tanh') - self.memory_layer = LinearNorm(embedding_dim, attention_dim, bias=False, - w_init_gain='tanh') - self.v = LinearNorm(attention_dim, 1, bias=False) - self.location_layer = LocationLayer(attention_location_n_filters, - attention_location_kernel_size, - attention_dim) - self.score_mask_value = -float("inf") - - def get_alignment_energies(self, query, processed_memory, - attention_weights_cat): - """ - PARAMS - ------ - query: 
decoder output (batch, n_mel_channels * n_frames_per_step) - processed_memory: processed encoder outputs (B, T_in, attention_dim) - attention_weights_cat: cumulative and prev. att weights (B, 2, max_time) - - RETURNS - ------- - alignment (batch, max_time) - """ - - processed_query = self.query_layer(query.unsqueeze(1)) - processed_attention_weights = self.location_layer(attention_weights_cat) - energies = self.v(torch.tanh( - processed_query + processed_attention_weights + processed_memory)) - - energies = energies.squeeze(-1) - return energies - - def forward(self, attention_hidden_state, memory, processed_memory, - attention_weights_cat, mask): - """ - PARAMS - ------ - attention_hidden_state: attention rnn last output - memory: encoder outputs - processed_memory: processed encoder outputs - attention_weights_cat: previous and cummulative attention weights - mask: binary mask for padded data - """ - alignment = self.get_alignment_energies( - attention_hidden_state, processed_memory, attention_weights_cat) - - if mask is not None: - alignment.data.masked_fill_(mask, self.score_mask_value) - - attention_weights = F.softmax(alignment, dim=1) - attention_context = torch.bmm(attention_weights.unsqueeze(1), memory) - attention_context = attention_context.squeeze(1) - - return attention_context, attention_weights - - -class Prenet(nn.Module): - def __init__(self, in_dim, sizes): - super(Prenet, self).__init__() - in_sizes = [in_dim] + sizes[:-1] - self.layers = nn.ModuleList( - [LinearNorm(in_size, out_size, bias=False) - for (in_size, out_size) in zip(in_sizes, sizes)]) - - def forward(self, x): - for linear in self.layers: - x = F.dropout(F.relu(linear(x)), p=0.5, training=True) - return x - - -class Postnet(nn.Module): - """Postnet - - Five 1-d convolution with 512 channels and kernel size 5 - """ - - def __init__(self, hparams): - super(Postnet, self).__init__() - self.convolutions = nn.ModuleList() - - self.convolutions.append( - nn.Sequential( - ConvNorm(hparams.n_mel_channels, hparams.postnet_embedding_dim, - kernel_size=hparams.postnet_kernel_size, stride=1, - padding=int((hparams.postnet_kernel_size - 1) / 2), - dilation=1, w_init_gain='tanh'), - nn.BatchNorm1d(hparams.postnet_embedding_dim)) - ) - - for i in range(1, hparams.postnet_n_convolutions - 1): - self.convolutions.append( - nn.Sequential( - ConvNorm(hparams.postnet_embedding_dim, - hparams.postnet_embedding_dim, - kernel_size=hparams.postnet_kernel_size, stride=1, - padding=int((hparams.postnet_kernel_size - 1) / 2), - dilation=1, w_init_gain='tanh'), - nn.BatchNorm1d(hparams.postnet_embedding_dim)) - ) - - self.convolutions.append( - nn.Sequential( - ConvNorm(hparams.postnet_embedding_dim, hparams.n_mel_channels, - kernel_size=hparams.postnet_kernel_size, stride=1, - padding=int((hparams.postnet_kernel_size - 1) / 2), - dilation=1, w_init_gain='linear'), - nn.BatchNorm1d(hparams.n_mel_channels)) - ) - - def forward(self, x): - for i in range(len(self.convolutions) - 1): - x = F.dropout(torch.tanh(self.convolutions[i](x)), 0.5, self.training) - x = F.dropout(self.convolutions[-1](x), 0.5, self.training) - - return x - - -class Encoder(nn.Module): - """Encoder module: - - Three 1-d convolution banks - - Bidirectional LSTM - """ - def __init__(self, hparams): - super(Encoder, self).__init__() - - convolutions = [] - for _ in range(hparams.encoder_n_convolutions): - conv_layer = nn.Sequential( - ConvNorm(hparams.encoder_embedding_dim, - hparams.encoder_embedding_dim, - kernel_size=hparams.encoder_kernel_size, stride=1, - 
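                # Shape check for this stack (hypothetical sizes): with the
                # usual encoder_kernel_size=5 and padding=(5 - 1) // 2 = 2,
                # a (B, 512, T) input keeps its length T through every conv,
                # so the bidirectional LSTM below sees one state per symbol:
                #
                #     conv = torch.nn.Conv1d(512, 512, 5, padding=2)
                #     conv(torch.randn(8, 512, 100)).shape
                #     # -> torch.Size([8, 512, 100])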
padding=int((hparams.encoder_kernel_size - 1) / 2), - dilation=1, w_init_gain='relu'), - nn.BatchNorm1d(hparams.encoder_embedding_dim)) - convolutions.append(conv_layer) - self.convolutions = nn.ModuleList(convolutions) - - self.lstm = nn.LSTM(hparams.encoder_embedding_dim, - int(hparams.encoder_embedding_dim / 2), 1, - batch_first=True, bidirectional=True) - - def forward(self, x, input_lengths): - for conv in self.convolutions: - x = F.dropout(F.relu(conv(x)), 0.5, self.training) - - x = x.transpose(1, 2) - - # pytorch tensor are not reversible, hence the conversion - input_lengths = input_lengths.cpu().numpy() - x = nn.utils.rnn.pack_padded_sequence( - x, input_lengths, batch_first=True) - - self.lstm.flatten_parameters() - outputs, _ = self.lstm(x) - - outputs, _ = nn.utils.rnn.pad_packed_sequence( - outputs, batch_first=True) - - return outputs - - def inference(self, x): - for conv in self.convolutions: - x = F.dropout(F.relu(conv(x)), 0.5, self.training) - - x = x.transpose(1, 2) - - self.lstm.flatten_parameters() - outputs, _ = self.lstm(x) - - return outputs - - -class AudioEncoder(nn.Module): - def __init__(self, hparams): - super(AudioEncoder, self).__init__() - - assert hparams.lat_dim > 0 - - convolutions = [] - inp_dim = hparams.n_mel_channels - for _ in range(hparams.lat_n_convolutions): - conv_layer = nn.Sequential( - ConvNorm(inp_dim, hparams.lat_n_filters, - kernel_size=hparams.lat_kernel_size, stride=1, - padding=int((hparams.lat_kernel_size - 1) / 2), - dilation=1, w_init_gain='tanh'), - nn.BatchNorm1d(hparams.lat_n_filters)) - inp_dim = hparams.lat_n_filters - convolutions.append(conv_layer) - self.convolutions = nn.ModuleList(convolutions) - - self.lstm = nn.LSTM(hparams.lat_n_filters, - int(hparams.lat_n_filters / 2), - hparams.lat_n_blstms, batch_first=True, - bidirectional=True) - self.pool = GlobalAvgPool() - - self.mu_proj = LinearNorm(hparams.lat_n_filters, hparams.lat_dim) - self.logvar_proj = LinearNorm(hparams.lat_n_filters, hparams.lat_dim) - self.lat_dim = hparams.lat_dim - - def forward(self, x, lengths): - """ - Args: - x (torch.Tensor): (B, F, T) - """ - - for conv in self.convolutions: - x = F.dropout(F.tanh(conv(x)), 0.5, self.training) - - x = x.transpose(1, 2) # (B, T, D) - - # x may not be sorted by length. 
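        # pack_padded_sequence requires lengths sorted in descending order here
        # (newer PyTorch can instead take enforce_sorted=False), hence the
        # sort -> pack -> process -> unpack -> unsort pattern below. In isolation:
        #
        #     lengths, perm = lengths.sort(0, descending=True)
        #     packed = nn.utils.rnn.pack_padded_sequence(x[perm], lengths, batch_first=True)
        #     out, _ = lstm(packed)
        #     out, _ = nn.utils.rnn.pad_packed_sequence(out, batch_first=True)
        #     _, unperm = perm.sort(0)
        #     out = out[unperm]  # restore the original batch order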
Sort->process->unsort - max_len = x.size(1) - assert max_len == torch.max(lengths).item() - - lengths, perm_idx = lengths.sort(0, descending=True) - x = x[perm_idx] - x = nn.utils.rnn.pack_padded_sequence(x, lengths, batch_first=True) - - self.lstm.flatten_parameters() - outputs, _ = self.lstm(x) - outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs, batch_first=True) - - _, unperm_idx = perm_idx.sort(0) - outputs = outputs[unperm_idx] # (B, T, D) - lengths = lengths[unperm_idx] # (B, T, D) - - outputs = self.pool(outputs, lengths) # (B, D) - - mu = self.mu_proj(outputs) - logvar = self.logvar_proj(outputs) - z = distr.Normal(mu, logvar).rsample() - return z, mu, logvar - - -class Decoder(nn.Module): - def __init__(self, hparams): - super(Decoder, self).__init__() - self.n_mel_channels = hparams.n_mel_channels - self.n_frames_per_step = hparams.n_frames_per_step - self.encoder_embedding_dim = hparams.encoder_embedding_dim - self.obs_dim = hparams.obs_dim - self.lat_dim = hparams.lat_dim - self.attention_rnn_dim = hparams.attention_rnn_dim - self.decoder_rnn_dim = hparams.decoder_rnn_dim - self.prenet_dim = hparams.prenet_dim - self.max_decoder_steps = hparams.max_decoder_steps - self.gate_threshold = hparams.gate_threshold - self.p_attention_dropout = hparams.p_attention_dropout - self.p_decoder_dropout = hparams.p_decoder_dropout - - self.prenet = Prenet( - hparams.n_mel_channels * hparams.n_frames_per_step, - [hparams.prenet_dim, hparams.prenet_dim]) - - self.attention_rnn = nn.LSTMCell( - hparams.prenet_dim + hparams.encoder_embedding_dim, - hparams.attention_rnn_dim) - - self.attention_layer = Attention( - hparams.attention_rnn_dim, hparams.encoder_embedding_dim, - hparams.attention_dim, hparams.attention_location_n_filters, - hparams.attention_location_kernel_size) - - encoder_tot_dim = (hparams.encoder_embedding_dim + \ - hparams.lat_dim + hparams.obs_dim) - self.decoder_rnn = nn.LSTMCell( - hparams.attention_rnn_dim + encoder_tot_dim, - hparams.decoder_rnn_dim, 1) - - self.linear_projection = LinearNorm( - hparams.decoder_rnn_dim + encoder_tot_dim, - hparams.n_mel_channels * hparams.n_frames_per_step) - - self.gate_layer = LinearNorm( - hparams.decoder_rnn_dim + encoder_tot_dim, 1, - bias=True, w_init_gain='sigmoid') - - def get_go_frame(self, memory): - """ Gets all zeros frames to use as first decoder input - PARAMS - ------ - memory: decoder outputs - - RETURNS - ------- - decoder_input: all zeros frames - """ - B = memory.size(0) - decoder_input = Variable(memory.data.new( - B, self.n_mel_channels * self.n_frames_per_step).zero_()) - return decoder_input - - def initialize_decoder_states(self, memory, obs_and_lat, mask): - """ Initializes attention rnn states, decoder rnn states, attention - weights, attention cumulative weights, attention context, stores memory - and stores processed memory - PARAMS - ------ - memory: Encoder outputs - obs_and_lat: Observed and latent attribute embeddings - mask: Mask for padded data if training, expects None for inference - """ - B = memory.size(0) - MAX_TIME = memory.size(1) - - self.attention_hidden = Variable(memory.data.new( - B, self.attention_rnn_dim).zero_()) - self.attention_cell = Variable(memory.data.new( - B, self.attention_rnn_dim).zero_()) - - self.decoder_hidden = Variable(memory.data.new( - B, self.decoder_rnn_dim).zero_()) - self.decoder_cell = Variable(memory.data.new( - B, self.decoder_rnn_dim).zero_()) - - self.attention_weights = Variable(memory.data.new( - B, MAX_TIME).zero_()) - self.attention_weights_cum = 
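        # Note on the AudioEncoder above: torch.distributions.Normal takes a
        # standard deviation as its scale argument, so Normal(mu, logvar) only
        # matches the usual VAE reparameterization if `logvar` already holds a
        # std. For a log-variance, the conventional sketch would be:
        #
        #     std = (0.5 * logvar).exp()
        #     z = mu + std * torch.randn_like(std)   # == Normal(mu, std).rsample()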
Variable(memory.data.new( - B, MAX_TIME).zero_()) - self.attention_context = Variable(memory.data.new( - B, self.encoder_embedding_dim).zero_()) - - self.memory = memory - self.processed_memory = self.attention_layer.memory_layer(memory) - self.obs_and_lat = obs_and_lat - self.mask = mask - - def parse_decoder_inputs(self, decoder_inputs): - """ Prepares decoder inputs, i.e. mel outputs - PARAMS - ------ - decoder_inputs: inputs used for teacher-forced training, i.e. mel-specs - - RETURNS - ------- - inputs: processed decoder inputs - - """ - # (B, n_mel_channels, T_out) -> (B, T_out, n_mel_channels) - decoder_inputs = decoder_inputs.transpose(1, 2) - decoder_inputs = decoder_inputs.view( - decoder_inputs.size(0), - int(decoder_inputs.size(1)/self.n_frames_per_step), -1) - # (B, T_out, n_mel_channels) -> (T_out, B, n_mel_channels) - decoder_inputs = decoder_inputs.transpose(0, 1) - return decoder_inputs - - def parse_decoder_outputs(self, mel_outputs, gate_outputs, alignments): - """ Prepares decoder outputs for output - PARAMS - ------ - mel_outputs: - gate_outputs: gate output energies - alignments: - - RETURNS - ------- - mel_outputs: - gate_outpust: gate output energies - alignments: - """ - # (T_out, B) -> (B, T_out) - alignments = torch.stack(alignments).transpose(0, 1) - # (T_out, B) -> (B, T_out) - gate_outputs = torch.stack(gate_outputs).transpose(0, 1) - gate_outputs = gate_outputs.contiguous() - # (T_out, B, n_mel_channels) -> (B, T_out, n_mel_channels) - mel_outputs = torch.stack(mel_outputs).transpose(0, 1).contiguous() - # decouple frames per step - mel_outputs = mel_outputs.view( - mel_outputs.size(0), -1, self.n_mel_channels) - # (B, T_out, n_mel_channels) -> (B, n_mel_channels, T_out) - mel_outputs = mel_outputs.transpose(1, 2) - - return mel_outputs, gate_outputs, alignments - - def decode(self, decoder_input): - """ Decoder step using stored states, attention and memory - PARAMS - ------ - decoder_input: previous mel output - - RETURNS - ------- - mel_output: - gate_output: gate output energies - attention_weights: - """ - cell_input = torch.cat((decoder_input, self.attention_context), -1) - self.attention_hidden, self.attention_cell = self.attention_rnn( - cell_input, (self.attention_hidden, self.attention_cell)) - self.attention_hidden = F.dropout( - self.attention_hidden, self.p_attention_dropout, self.training) - - attention_weights_cat = torch.cat( - (self.attention_weights.unsqueeze(1), - self.attention_weights_cum.unsqueeze(1)), dim=1) - self.attention_context, self.attention_weights = self.attention_layer( - self.attention_hidden, self.memory, self.processed_memory, - attention_weights_cat, self.mask) - - self.attention_weights_cum += self.attention_weights - decoder_input = torch.cat( - (self.attention_hidden, self.attention_context), -1) - if self.obs_and_lat is not None: - decoder_input = torch.cat((decoder_input, self.obs_and_lat), -1) - self.decoder_hidden, self.decoder_cell = self.decoder_rnn( - decoder_input, (self.decoder_hidden, self.decoder_cell)) - self.decoder_hidden = F.dropout( - self.decoder_hidden, self.p_decoder_dropout, self.training) - - decoder_hidden_attention_context = torch.cat( - (self.decoder_hidden, self.attention_context), dim=1) - if self.obs_and_lat is not None: - decoder_hidden_attention_context = torch.cat( - (decoder_hidden_attention_context, self.obs_and_lat), dim=1) - decoder_output = self.linear_projection( - decoder_hidden_attention_context) - - gate_prediction = self.gate_layer(decoder_hidden_attention_context) - return 
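        # One decode() step above, at the shape level (hypothetical dims,
        # B = batch, r = n_frames_per_step):
        #
        #     decoder_input (pre-netted)   (B, prenet_dim)
        #     attention_rnn            ->  attention_hidden (B, attention_rnn_dim)
        #     attention_layer          ->  context (B, encoder_embedding_dim),
        #                                  weights (B, T_in)
        #     decoder_rnn              ->  decoder_hidden (B, decoder_rnn_dim)
        #     linear_projection        ->  mel frames (B, n_mel_channels * r)
        #     gate_layer               ->  stop-token logit (B, 1)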
decoder_output, gate_prediction, self.attention_weights - - def forward(self, memory, obs_and_lat, decoder_inputs, memory_lengths): - """ Decoder forward pass for training - PARAMS - ------ - memory: Encoder outputs - obs_and_lat: Observed and latent attribute embeddings - decoder_inputs: Decoder inputs for teacher forcing. i.e. mel-specs - memory_lengths: Encoder output lengths for attention masking. - - RETURNS - ------- - mel_outputs: mel outputs from the decoder - gate_outputs: gate outputs from the decoder - alignments: sequence of attention weights from the decoder - """ - - decoder_input = self.get_go_frame(memory).unsqueeze(0) - decoder_inputs = self.parse_decoder_inputs(decoder_inputs) - decoder_inputs = torch.cat((decoder_input, decoder_inputs), dim=0) - decoder_inputs = self.prenet(decoder_inputs) - - self.initialize_decoder_states( - memory, obs_and_lat, mask=~get_mask_from_lengths(memory_lengths)) - - mel_outputs, gate_outputs, alignments = [], [], [] - while len(mel_outputs) < decoder_inputs.size(0) - 1: - decoder_input = decoder_inputs[len(mel_outputs)] - mel_output, gate_output, attention_weights = self.decode( - decoder_input) - mel_outputs += [mel_output.squeeze(1)] - gate_outputs += [gate_output.squeeze()] - alignments += [attention_weights] - - mel_outputs, gate_outputs, alignments = self.parse_decoder_outputs( - mel_outputs, gate_outputs, alignments) - - return mel_outputs, gate_outputs, alignments - - def inference(self, memory, obs_and_lat, ret_has_eos=False): - """ Decoder inference - PARAMS - ------ - memory: Encoder outputs - obs_and_lat: Observed and latent attribute embeddings - - RETURNS - ------- - mel_outputs: mel outputs from the decoder - gate_outputs: gate outputs from the decoder - alignments: sequence of attention weights from the decoder - """ - decoder_input = self.get_go_frame(memory) - - self.initialize_decoder_states(memory, obs_and_lat, mask=None) - - mel_outputs, gate_outputs, alignments = [], [], [] - has_eos = False - while True: - decoder_input = self.prenet(decoder_input) - mel_output, gate_output, alignment = self.decode(decoder_input) - - mel_outputs += [mel_output.squeeze(1)] - gate_outputs += [gate_output] - alignments += [alignment] - - if torch.sigmoid(gate_output.data) > self.gate_threshold: - has_eos = True - break - elif len(mel_outputs) == self.max_decoder_steps: - # print("Warning! 
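        # The stop condition above: the gate layer emits one logit per decoder
        # step, and decoding halts once sigmoid(logit) exceeds gate_threshold
        # (0.5 in the usual Tacotron2 configuration) or max_decoder_steps is
        # reached. A quick numeric check:
        #
        #     torch.sigmoid(torch.tensor(2.0))    # -> 0.8808, > 0.5: stop
        #     torch.sigmoid(torch.tensor(-3.0))   # -> 0.0474, keep decoding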
Reached max decoder steps") - break - - decoder_input = mel_output - - mel_outputs, gate_outputs, alignments = self.parse_decoder_outputs( - mel_outputs, gate_outputs, alignments) - - if ret_has_eos: - return mel_outputs, gate_outputs, alignments, has_eos - else: - return mel_outputs, gate_outputs, alignments - - -class Tacotron2(nn.Module): - def __init__(self, hparams): - super(Tacotron2, self).__init__() - self.mask_padding = hparams.mask_padding - self.fp16_run = hparams.fp16_run - self.n_mel_channels = hparams.n_mel_channels - self.n_frames_per_step = hparams.n_frames_per_step - - # initialize text encoder embedding - self.embedding = nn.Embedding( - hparams.n_symbols, hparams.symbols_embedding_dim) - std = sqrt(2.0 / (hparams.n_symbols + hparams.symbols_embedding_dim)) - val = sqrt(3.0) * std # uniform bounds for std - self.embedding.weight.data.uniform_(-val, val) - - # initialize observed attribute embedding - self.obs_embedding = None - if hparams.obs_dim > 0: - self.obs_embedding = nn.Embedding( - hparams.obs_n_class, hparams.obs_dim) - std = sqrt(2.0 / (hparams.obs_n_class + hparams.obs_dim)) - val = sqrt(3.0) * std # uniform bounds for std - self.obs_embedding.weight.data.uniform_(-val, val) - - self.encoder = Encoder(hparams) - self.decoder = Decoder(hparams) - self.postnet = Postnet(hparams) - - self.lat_encoder = None - if hparams.lat_dim > 0: - self.lat_encoder = AudioEncoder(hparams) - - def parse_batch(self, batch): - (text_padded, input_lengths, obs_labels, - mel_padded, gate_padded, output_lengths) = batch - text_padded = to_gpu(text_padded).long() - input_lengths = to_gpu(input_lengths).long() - obs_labels = to_gpu(obs_labels).long() - max_len = torch.max(input_lengths.data).item() - mel_padded = to_gpu(mel_padded).float() - gate_padded = to_gpu(gate_padded).float() - output_lengths = to_gpu(output_lengths).long() - - return ( - (text_padded, input_lengths, obs_labels, - mel_padded, max_len, output_lengths), - (mel_padded, gate_padded)) - - def parse_output(self, outputs, output_lengths=None): - if self.mask_padding and output_lengths is not None: - mask = ~get_mask_from_lengths(output_lengths) - mask = mask.expand(self.n_mel_channels, mask.size(0), mask.size(1)) - mask = mask.permute(1, 0, 2) - - outputs[0].data.masked_fill_(mask, 0.0) - outputs[1].data.masked_fill_(mask, 0.0) - outputs[2].data.masked_fill_(mask[:, 0, :], 1e3) # gate energies - - return outputs - - def forward(self, inputs): - (text_inputs, text_lengths, obs_labels, - mels, max_len, output_lengths) = inputs - text_lengths, output_lengths = text_lengths.data, output_lengths.data - - embedded_inputs = self.embedding(text_inputs).transpose(1, 2) - - encoder_outputs = self.encoder(embedded_inputs, text_lengths) - - obs = None - if self.obs_embedding is not None: - obs = self.obs_embedding(obs_labels) - - lat, lat_mu, lat_logvar = None, None, None - if self.lat_encoder is not None: - (lat, lat_mu, lat_logvar) = self.lat_encoder(mels, output_lengths) - - obs_and_lat = [x for x in [obs, lat] if x is not None] - if bool(obs_and_lat): - obs_and_lat = torch.cat(obs_and_lat, dim=-1) - else: - obs_and_lat = None - - mel_outputs, gate_outputs, alignments = self.decoder( - encoder_outputs, obs_and_lat, mels, memory_lengths=text_lengths) - - mel_outputs_postnet = self.postnet(mel_outputs) - mel_outputs_postnet = mel_outputs + mel_outputs_postnet - - return self.parse_output( - [mel_outputs, mel_outputs_postnet, gate_outputs, alignments, - lat_mu, lat_logvar], - output_lengths) - - def inference(self, inputs, 
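        # In parse_output above, padded frames are zeroed and the matching gate
        # energies are filled with 1e3, so sigmoid(1e3) ~= 1.0 marks every
        # padded position as already "stopped". The mask is built once as
        # (B, T) and broadcast across the mel channels:
        #
        #     mask = ~get_mask_from_lengths(output_lengths)      # (B, T)
        #     mask = mask.expand(n_mel, B, T).permute(1, 0, 2)   # (B, n_mel, T)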
obs_labels=None, lat=None, ret_has_eos=False): - embedded_inputs = self.embedding(inputs).transpose(1, 2) - encoder_outputs = self.encoder.inference(embedded_inputs) - - if obs_labels is None: - obs_labels = torch.LongTensor(len(inputs)) - obs_labels = obs_labels.to(inputs.device).zero_() - - obs = None - if self.obs_embedding is not None: - obs = self.obs_embedding(obs_labels) - - if self.lat_encoder is not None: - if lat is None: - lat = torch.FloatTensor(len(inputs), self.lat_encoder.lat_dim) - lat = lat.to(inputs.device).zero_().type(encoder_outputs.type()) - - obs_and_lat = [x for x in [obs, lat] if x is not None] - if bool(obs_and_lat): - obs_and_lat = torch.cat(obs_and_lat, dim=-1) - else: - obs_and_lat = None - - mel_outputs, gate_outputs, alignments, has_eos = self.decoder.inference( - encoder_outputs, obs_and_lat, ret_has_eos=True) - - mel_outputs_postnet = self.postnet(mel_outputs) - mel_outputs_postnet = mel_outputs + mel_outputs_postnet - - outputs = self.parse_output( - [mel_outputs, mel_outputs_postnet, gate_outputs, alignments]) - - if ret_has_eos: - return outputs + [has_eos] - else: - return outputs diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/lightconv_lm.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/lightconv_lm.py deleted file mode 100644 index 1d9efc4e42a5ecc1b83338055f18ade5a83ea666..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/lightconv_lm.py +++ /dev/null @@ -1,306 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq import utils -from fairseq.models import ( - FairseqLanguageModel, - register_model, - register_model_architecture, -) -from fairseq.models.lightconv import Embedding, LightConvDecoder -from fairseq.modules import AdaptiveInput, CharacterTokenEmbedder - - -@register_model("lightconv_lm") -class LightConvLanguageModel(FairseqLanguageModel): - def __init__(self, decoder): - super().__init__(decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", - default=0.1, - type=float, - metavar="D", - help="dropout probability", - ) - parser.add_argument( - "--attention-dropout", - default=0.0, - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--relu-dropout", - default=0.0, - type=float, - metavar="D", - help="dropout probability after ReLU in FFN", - ) - parser.add_argument( - "--input-dropout", - type=float, - metavar="D", - help="dropout probability of the inputs", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-output-dim", - type=int, - metavar="N", - help="decoder output dimension", - ) - parser.add_argument( - "--decoder-input-dim", type=int, metavar="N", help="decoder input dimension" - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--decoder-normalize-before", - default=False, - 
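        # A hypothetical invocation wiring these flags together (the data path
        # is a placeholder; the flag names are the ones registered in this file):
        #
        #     fairseq-train data-bin/wikitext-103 \
        #         --task language_modeling --arch lightconv_lm \
        #         --decoder-conv-type lightweight --decoder-glu True \
        #         --decoder-kernel-size-list "[3,7,15,31,31,31]" \
        #         --tokens-per-sample 512 --max-tokens 4096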
action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--adaptive-softmax-cutoff", - metavar="EXPR", - help="comma separated list of adaptive softmax cutoff points. " - "Must be used with adaptive_loss criterion", - ) - parser.add_argument( - "--adaptive-softmax-dropout", - type=float, - metavar="D", - help="sets adaptive softmax dropout for the tail projections", - ) - parser.add_argument( - "--adaptive-softmax-factor", - type=float, - metavar="N", - help="adaptive input factor", - ) - parser.add_argument( - "--no-token-positional-embeddings", - default=False, - action="store_true", - help="if set, disables positional embeddings (outside self attention)", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - default=False, - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--character-embeddings", - default=False, - action="store_true", - help="if set, uses character embedding convolutions to produce token embeddings", - ) - parser.add_argument( - "--character-filters", - type=str, - metavar="LIST", - default="[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]", - help="size of character embeddings", - ) - parser.add_argument( - "--character-embedding-dim", - type=int, - metavar="N", - default=4, - help="size of character embeddings", - ) - parser.add_argument( - "--char-embedder-highway-layers", - type=int, - metavar="N", - default=2, - help="number of highway layers for character token embeddder", - ) - parser.add_argument( - "--adaptive-input", - default=False, - action="store_true", - help="if set, uses adaptive input", - ) - parser.add_argument( - "--adaptive-input-factor", - type=float, - metavar="N", - help="adaptive input factor", - ) - parser.add_argument( - "--adaptive-input-cutoff", - metavar="EXPR", - help="comma separated list of adaptive input cutoff points.", - ) - parser.add_argument( - "--tie-adaptive-weights", - action="store_true", - help="if set, ties the weights of adaptive softmax and adaptive input", - ) - parser.add_argument( - "--tie-adaptive-proj", - action="store_true", - help="if set, ties the projection weights of adaptive softmax and adaptive input", - ) - parser.add_argument( - "--decoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the decoder", - ) - - """LightConv and DynamicConv arguments""" - parser.add_argument( - "--decoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31]")', - ) - parser.add_argument( - "--decoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--decoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument("--weight-softmax", default=True, type=utils.eval_bool) - parser.add_argument( - "--weight-dropout", - type=float, - metavar="D", - help="dropout probability for conv weights", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_lm_architecture(args) - - if getattr(args, "max_source_positions", None) is None: - args.max_source_positions = args.tokens_per_sample - if getattr(args, "max_target_positions", None) is None: - args.max_target_positions = args.tokens_per_sample - - if args.character_embeddings: - embed_tokens = CharacterTokenEmbedder( - task.dictionary, - 
eval(args.character_filters), - args.character_embedding_dim, - args.decoder_embed_dim, - args.char_embedder_highway_layers, - ) - elif args.adaptive_input: - embed_tokens = AdaptiveInput( - len(task.dictionary), - task.dictionary.pad(), - args.decoder_input_dim, - args.adaptive_input_factor, - args.decoder_embed_dim, - utils.eval_str_list(args.adaptive_input_cutoff, type=int), - ) - else: - embed_tokens = Embedding( - len(task.dictionary), args.decoder_input_dim, task.dictionary.pad() - ) - - if args.tie_adaptive_weights: - assert args.adaptive_input - assert args.adaptive_input_factor == args.adaptive_softmax_factor - assert ( - args.adaptive_softmax_cutoff == args.adaptive_input_cutoff - ), "{} != {}".format( - args.adaptive_softmax_cutoff, args.adaptive_input_cutoff - ) - assert args.decoder_input_dim == args.decoder_output_dim - - decoder = LightConvDecoder( - args, - task.output_dictionary, - embed_tokens, - no_encoder_attn=True, - final_norm=False, - ) - return LightConvLanguageModel(decoder) - - -@register_model_architecture("lightconv_lm", "lightconv_lm") -def base_lm_architecture(args): - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 2048) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.adaptive_softmax_factor = getattr(args, "adaptive_softmax_factor", 4) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - - args.character_embeddings = getattr(args, "character_embeddings", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - args.decoder_conv_dim = getattr(args, "decoder_conv_dim", args.decoder_embed_dim) - - # The model training is not stable without this - args.decoder_normalize_before = True - - args.adaptive_input = getattr(args, "adaptive_input", False) - args.adaptive_input_factor = getattr(args, "adaptive_input_factor", 4) - args.adaptive_input_cutoff = getattr(args, "adaptive_input_cutoff", None) - - args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False) - args.tie_adaptive_proj = getattr(args, "tie_adaptive_proj", False) - - args.decoder_kernel_size_list = getattr( - args, "decoder_kernel_size_list", [3, 7, 15, 31, 31, 31] - ) - if len(args.decoder_kernel_size_list) == 1: - args.decoder_kernel_size_list = ( - args.decoder_kernel_size_list * args.decoder_layers - ) - assert ( - len(args.decoder_kernel_size_list) == args.decoder_layers - ), "decoder_kernel_size_list doesn't match decoder_layers" - args.decoder_glu = getattr(args, "decoder_glu", True) - args.input_dropout = getattr(args, "input_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", args.attention_dropout) - - -@register_model_architecture("lightconv_lm", "lightconv_lm_gbw") -def lightconv_lm_gbw(args): - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - base_lm_architecture(args) diff --git 
a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/preprocessing/denoise_and_vad_audio.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/preprocessing/denoise_and_vad_audio.py deleted file mode 100644 index 4e13b38a5d3fb44dd3969e6afcb8f202274ee3b7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/preprocessing/denoise_and_vad_audio.py +++ /dev/null @@ -1,204 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -import os -import csv -import tempfile -from collections import defaultdict -from pathlib import Path - -import torchaudio -try: - import webrtcvad -except ImportError: - raise ImportError("Please install py-webrtcvad: pip install webrtcvad") -import pandas as pd -from tqdm import tqdm - -from examples.speech_synthesis.preprocessing.denoiser.pretrained import master64 -import examples.speech_synthesis.preprocessing.denoiser.utils as utils -from examples.speech_synthesis.preprocessing.vad import ( - frame_generator, vad_collector, read_wave, write_wave, FS_MS, THRESHOLD, - SCALE -) -from examples.speech_to_text.data_utils import save_df_to_tsv - - -log = logging.getLogger(__name__) - -PATHS = ["after_denoise", "after_vad"] -MIN_T = 0.05 - - -def generate_tmp_filename(extension="txt"): - return tempfile._get_default_tempdir() + "/" + \ - next(tempfile._get_candidate_names()) + "." + extension - - -def convert_sr(inpath, sr, output_path=None): - if not output_path: - output_path = generate_tmp_filename("wav") - cmd = f"sox {inpath} -r {sr} {output_path}" - os.system(cmd) - return output_path - - -def apply_vad(vad, inpath): - audio, sample_rate = read_wave(inpath) - frames = frame_generator(FS_MS, audio, sample_rate) - frames = list(frames) - segments = vad_collector(sample_rate, FS_MS, 300, vad, frames) - merge_segments = list() - timestamp_start = 0.0 - timestamp_end = 0.0 - # removing start, end, and long sequences of sils - for i, segment in enumerate(segments): - merge_segments.append(segment[0]) - if i and timestamp_start: - sil_duration = segment[1] - timestamp_end - if sil_duration > THRESHOLD: - merge_segments.append(int(THRESHOLD / SCALE) * (b'\x00')) - else: - merge_segments.append(int((sil_duration / SCALE)) * (b'\x00')) - timestamp_start = segment[1] - timestamp_end = segment[2] - segment = b''.join(merge_segments) - return segment, sample_rate - - -def write(wav, filename, sr=16_000): - # Normalize audio if it prevents clipping - wav = wav / max(wav.abs().max().item(), 1) - torchaudio.save(filename, wav.cpu(), sr, encoding="PCM_S", - bits_per_sample=16) - - -def process(args): - # making sure we are requested either denoise or vad - if not args.denoise and not args.vad: - log.error("No denoise or vad is requested.") - return - - log.info("Creating out directories...") - if args.denoise: - out_denoise = Path(args.output_dir).absolute().joinpath(PATHS[0]) - out_denoise.mkdir(parents=True, exist_ok=True) - if args.vad: - out_vad = Path(args.output_dir).absolute().joinpath(PATHS[1]) - out_vad.mkdir(parents=True, exist_ok=True) - - log.info("Loading pre-trained speech enhancement model...") - model = master64().to(args.device) - - log.info("Building the VAD model...") - vad = webrtcvad.Vad(int(args.vad_agg_level)) - - # preparing the output dict - output_dict = defaultdict(list) - - log.info(f"Parsing input manifest: 
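    # webrtcvad consumes 10/20/30 ms frames of 16-bit mono PCM at 8/16/32/48 kHz,
    # which is why the helpers above chunk and resample the audio first. A
    # minimal standalone check (assuming 16 kHz and 30 ms frames):
    #
    #     import webrtcvad
    #     vad = webrtcvad.Vad(2)                   # aggressiveness 0 (least) to 3 (most)
    #     frame = b"\x00\x00" * 480                # 0.030 s * 16000 Hz = 480 samples
    #     vad.is_speech(frame, sample_rate=16000)  # -> False for pure silence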
{args.audio_manifest}") - with open(args.audio_manifest, "r") as f: - manifest_dict = csv.DictReader(f, delimiter="\t") - for row in tqdm(manifest_dict): - filename = str(row["audio"]) - - final_output = filename - keep_sample = True - n_frames = row["n_frames"] - snr = -1 - if args.denoise: - output_path_denoise = out_denoise.joinpath(Path(filename).name) - # convert to 16khz in case we use a differet sr - tmp_path = convert_sr(final_output, 16000) - - # loading audio file and generating the enhanced version - out, sr = torchaudio.load(tmp_path) - out = out.to(args.device) - estimate = model(out) - estimate = (1 - args.dry_wet) * estimate + args.dry_wet * out - write(estimate[0], str(output_path_denoise), sr) - - snr = utils.cal_snr(out, estimate) - snr = snr.cpu().detach().numpy()[0][0] - final_output = str(output_path_denoise) - - if args.vad: - output_path_vad = out_vad.joinpath(Path(filename).name) - sr = torchaudio.info(final_output).sample_rate - if sr in [16000, 32000, 48000]: - tmp_path = final_output - elif sr < 16000: - tmp_path = convert_sr(final_output, 16000) - elif sr < 32000: - tmp_path = convert_sr(final_output, 32000) - else: - tmp_path = convert_sr(final_output, 48000) - # apply VAD - segment, sample_rate = apply_vad(vad, tmp_path) - if len(segment) < sample_rate * MIN_T: - keep_sample = False - print(( - f"WARNING: skip {filename} because it is too short " - f"after VAD ({len(segment) / sample_rate} < {MIN_T})" - )) - else: - if sample_rate != sr: - tmp_path = generate_tmp_filename("wav") - write_wave(tmp_path, segment, sample_rate) - convert_sr(tmp_path, sr, - output_path=str(output_path_vad)) - else: - write_wave(str(output_path_vad), segment, sample_rate) - final_output = str(output_path_vad) - segment, _ = torchaudio.load(final_output) - n_frames = segment.size(1) - - if keep_sample: - output_dict["id"].append(row["id"]) - output_dict["audio"].append(final_output) - output_dict["n_frames"].append(n_frames) - output_dict["tgt_text"].append(row["tgt_text"]) - output_dict["speaker"].append(row["speaker"]) - output_dict["src_text"].append(row["src_text"]) - output_dict["snr"].append(snr) - - out_tsv_path = Path(args.output_dir) / Path(args.audio_manifest).name - log.info(f"Saving manifest to {out_tsv_path.as_posix()}") - save_df_to_tsv(pd.DataFrame.from_dict(output_dict), out_tsv_path) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--audio-manifest", "-i", required=True, - type=str, help="path to the input manifest.") - parser.add_argument( - "--output-dir", "-o", required=True, type=str, - help="path to the output dir. it will contain files after denoising and" - " vad" - ) - parser.add_argument("--vad-agg-level", "-a", type=int, default=2, - help="the aggresive level of the vad [0-3].") - parser.add_argument( - "--dry-wet", "-dw", type=float, default=0.01, - help="the level of linear interpolation between noisy and enhanced " - "files." - ) - parser.add_argument( - "--device", "-d", type=str, default="cpu", - help="the device to be used for the speech enhancement model: " - "cpu | cuda." 
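    # The --dry-wet mix applied in the loop above linearly interpolates the
    # enhanced signal with the raw input, which preserves a little of the
    # original ambience:
    #
    #     dry_wet = 0.01
    #     mixed = (1 - dry_wet) * enhanced + dry_wet * noisy  # same-shape tensors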
- ) - parser.add_argument("--denoise", action="store_true", - help="apply a denoising") - parser.add_argument("--vad", action="store_true", help="apply a VAD") - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/fastspeech2_loss.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/fastspeech2_loss.py deleted file mode 100644 index 085d5628d4c4c242edee4aa3bc4a01aa4582eb21..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/fastspeech2_loss.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. - -from typing import List, Dict, Any -from dataclasses import dataclass, field - -import torch -import torch.nn.functional as F - -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from fairseq.data.data_utils import lengths_to_mask -from fairseq.models.fairseq_model import FairseqEncoderModel - - -@dataclass -class FastSpeech2CriterionConfig(FairseqDataclass): - ctc_weight: float = field( - default=0.0, metadata={"help": "weight for CTC loss"} - ) - - -@register_criterion("fastspeech2", dataclass=FastSpeech2CriterionConfig) -class FastSpeech2Loss(FairseqCriterion): - def __init__(self, task, ctc_weight): - super().__init__(task) - self.ctc_weight = ctc_weight - - def forward(self, model: FairseqEncoderModel, sample, reduction="mean"): - src_tokens = sample["net_input"]["src_tokens"] - src_lens = sample["net_input"]["src_lengths"] - tgt_lens = sample["target_lengths"] - _feat_out, _, log_dur_out, pitch_out, energy_out = model( - src_tokens=src_tokens, - src_lengths=src_lens, - prev_output_tokens=sample["net_input"]["prev_output_tokens"], - incremental_state=None, - target_lengths=tgt_lens, - speaker=sample["speaker"], - durations=sample["durations"], - pitches=sample["pitches"], - energies=sample["energies"] - ) - - src_mask = lengths_to_mask(sample["net_input"]["src_lengths"]) - tgt_mask = lengths_to_mask(sample["target_lengths"]) - - pitches, energies = sample["pitches"], sample["energies"] - pitch_out, pitches = pitch_out[src_mask], pitches[src_mask] - energy_out, energies = energy_out[src_mask], energies[src_mask] - - feat_out, feat = _feat_out[tgt_mask], sample["target"][tgt_mask] - l1_loss = F.l1_loss(feat_out, feat, reduction=reduction) - - pitch_loss = F.mse_loss(pitch_out, pitches, reduction=reduction) - energy_loss = F.mse_loss(energy_out, energies, reduction=reduction) - - log_dur_out = log_dur_out[src_mask] - dur = sample["durations"].float() - dur = dur.half() if log_dur_out.type().endswith(".HalfTensor") else dur - log_dur = torch.log(dur + 1)[src_mask] - dur_loss = F.mse_loss(log_dur_out, log_dur, reduction=reduction) - - ctc_loss = torch.tensor(0.).type_as(l1_loss) - if self.ctc_weight > 0.: - lprobs = model.get_normalized_probs((_feat_out,), log_probs=True) - lprobs = lprobs.transpose(0, 1) # T x B x C - src_mask = lengths_to_mask(src_lens) - src_tokens_flat = src_tokens.masked_select(src_mask) - ctc_loss = F.ctc_loss( - lprobs, src_tokens_flat, tgt_lens, src_lens, - reduction=reduction, zero_infinity=True - ) * self.ctc_weight - - loss = l1_loss + dur_loss + pitch_loss + 
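        # The duration predictor above is supervised in log space: targets are
        # log(d + 1), so zero-length durations stay well-defined and long
        # phones do not dominate the MSE. A quick numeric check:
        #
        #     torch.log(torch.tensor([0., 1., 10.]) + 1)
        #     # -> tensor([0.0000, 0.6931, 2.3979])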
energy_loss + ctc_loss - - sample_size = sample["nsentences"] - logging_output = { - "loss": utils.item(loss.data), - "ntokens": sample["ntokens"], - "nsentences": sample["nsentences"], - "sample_size": sample_size, - "l1_loss": utils.item(l1_loss.data), - "dur_loss": utils.item(dur_loss.data), - "pitch_loss": utils.item(pitch_loss.data), - "energy_loss": utils.item(energy_loss.data), - "ctc_loss": utils.item(ctc_loss.data), - } - return loss, sample_size, logging_output - - @classmethod - def reduce_metrics(cls, logging_outputs: List[Dict[str, Any]]) -> None: - ns = [log.get("sample_size", 0) for log in logging_outputs] - ntot = sum(ns) - ws = [n / (ntot + 1e-8) for n in ns] - for key in [ - "loss", "l1_loss", "dur_loss", "pitch_loss", "energy_loss", - "ctc_loss" - ]: - vals = [log.get(key, 0) for log in logging_outputs] - val = sum(val * w for val, w in zip(vals, ws)) - metrics.log_scalar(key, val, ntot, round=3) - metrics.log_scalar("sample_size", ntot, len(logging_outputs)) - - # inference metrics - if "targ_frames" not in logging_outputs[0]: - return - n = sum(log.get("targ_frames", 0) for log in logging_outputs) - for key, new_key in [ - ("mcd_loss", "mcd_loss"), - ("pred_frames", "pred_ratio"), - ("nins", "ins_rate"), - ("ndel", "del_rate"), - ]: - val = sum(log.get(key, 0) for log in logging_outputs) - metrics.log_scalar(new_key, val / n, n, round=3) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - return False diff --git a/spaces/Onekee/ehartford-Wizard-Vicuna-13B-Uncensored/app.py b/spaces/Onekee/ehartford-Wizard-Vicuna-13B-Uncensored/app.py deleted file mode 100644 index f3190d64a57580067329da895fcba8afd3f3d2b9..0000000000000000000000000000000000000000 --- a/spaces/Onekee/ehartford-Wizard-Vicuna-13B-Uncensored/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/ehartford/Wizard-Vicuna-13B-Uncensored").launch() \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/samplers/distributed_sampler.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/samplers/distributed_sampler.py deleted file mode 100644 index a098e6ac07c1b193fddcb69e6e54aced82e6081c..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/samplers/distributed_sampler.py +++ /dev/null @@ -1,278 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import itertools -import logging -import math -from collections import defaultdict -from typing import Optional -import torch -from torch.utils.data.sampler import Sampler - -from detectron2.utils import comm - -logger = logging.getLogger(__name__) - - -class TrainingSampler(Sampler): - """ - In training, we only care about the "infinite stream" of training data. - So this sampler produces an infinite stream of indices and - all workers cooperate to correctly shuffle the indices and sample different indices. - - The samplers in each worker effectively produces `indices[worker_id::num_workers]` - where `indices` is an infinite stream of indices consisting of - `shuffle(range(size)) + shuffle(range(size)) + ...` (if shuffle is True) - or `range(size) + range(size) + ...` (if shuffle is False) - - Note that this sampler does not shard based on pytorch DataLoader worker id. - A sampler passed to pytorch DataLoader is used only with map-style dataset - and will not be executed inside workers. 
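
    As a concrete sketch of the sharding used below (hypothetical numbers):
    with world_size=4 and rank=1, this worker consumes
    itertools.islice(infinite_indices, 1, None, 4), i.e. positions 1, 5, 9, ...
    of the endless shuffled stream, so the ranks partition every pass over the
    data disjointly.
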
- But if this sampler is used in a way that it gets execute inside a dataloader - worker, then extra work needs to be done to shard its outputs based on worker id. - This is required so that workers don't produce identical data. - :class:`ToIterableDataset` implements this logic. - This note is true for all samplers in detectron2. - """ - - def __init__(self, size: int, shuffle: bool = True, seed: Optional[int] = None): - """ - Args: - size (int): the total number of data of the underlying dataset to sample from - shuffle (bool): whether to shuffle the indices or not - seed (int): the initial seed of the shuffle. Must be the same - across all workers. If None, will use a random seed shared - among workers (require synchronization among all workers). - """ - if not isinstance(size, int): - raise TypeError(f"TrainingSampler(size=) expects an int. Got type {type(size)}.") - if size <= 0: - raise ValueError(f"TrainingSampler(size=) expects a positive int. Got {size}.") - self._size = size - self._shuffle = shuffle - if seed is None: - seed = comm.shared_random_seed() - self._seed = int(seed) - - self._rank = comm.get_rank() - self._world_size = comm.get_world_size() - - def __iter__(self): - start = self._rank - yield from itertools.islice(self._infinite_indices(), start, None, self._world_size) - - def _infinite_indices(self): - g = torch.Generator() - g.manual_seed(self._seed) - while True: - if self._shuffle: - yield from torch.randperm(self._size, generator=g).tolist() - else: - yield from torch.arange(self._size).tolist() - - -class RandomSubsetTrainingSampler(TrainingSampler): - """ - Similar to TrainingSampler, but only sample a random subset of indices. - This is useful when you want to estimate the accuracy vs data-number curves by - training the model with different subset_ratio. - """ - - def __init__( - self, - size: int, - subset_ratio: float, - shuffle: bool = True, - seed_shuffle: Optional[int] = None, - seed_subset: Optional[int] = None, - ): - """ - Args: - size (int): the total number of data of the underlying dataset to sample from - subset_ratio (float): the ratio of subset data to sample from the underlying dataset - shuffle (bool): whether to shuffle the indices or not - seed_shuffle (int): the initial seed of the shuffle. Must be the same - across all workers. If None, will use a random seed shared - among workers (require synchronization among all workers). - seed_subset (int): the seed to randomize the subset to be sampled. - Must be the same across all workers. If None, will use a random seed shared - among workers (require synchronization among all workers). 
- """ - super().__init__(size=size, shuffle=shuffle, seed=seed_shuffle) - - assert 0.0 < subset_ratio <= 1.0 - self._size_subset = int(size * subset_ratio) - assert self._size_subset > 0 - if seed_subset is None: - seed_subset = comm.shared_random_seed() - self._seed_subset = int(seed_subset) - - # randomly generate the subset indexes to be sampled from - g = torch.Generator() - g.manual_seed(self._seed_subset) - indexes_randperm = torch.randperm(self._size, generator=g) - self._indexes_subset = indexes_randperm[: self._size_subset] - - logger.info("Using RandomSubsetTrainingSampler......") - logger.info(f"Randomly sample {self._size_subset} data from the original {self._size} data") - - def _infinite_indices(self): - g = torch.Generator() - g.manual_seed(self._seed) # self._seed equals seed_shuffle from __init__() - while True: - if self._shuffle: - # generate a random permutation to shuffle self._indexes_subset - randperm = torch.randperm(self._size_subset, generator=g) - yield from self._indexes_subset[randperm].tolist() - else: - yield from self._indexes_subset.tolist() - - -class RepeatFactorTrainingSampler(Sampler): - """ - Similar to TrainingSampler, but a sample may appear more times than others based - on its "repeat factor". This is suitable for training on class imbalanced datasets like LVIS. - """ - - def __init__(self, repeat_factors, *, shuffle=True, seed=None): - """ - Args: - repeat_factors (Tensor): a float vector, the repeat factor for each indice. When it's - full of ones, it is equivalent to ``TrainingSampler(len(repeat_factors), ...)``. - shuffle (bool): whether to shuffle the indices or not - seed (int): the initial seed of the shuffle. Must be the same - across all workers. If None, will use a random seed shared - among workers (require synchronization among all workers). - """ - self._shuffle = shuffle - if seed is None: - seed = comm.shared_random_seed() - self._seed = int(seed) - - self._rank = comm.get_rank() - self._world_size = comm.get_world_size() - - # Split into whole number (_int_part) and fractional (_frac_part) parts. - self._int_part = torch.trunc(repeat_factors) - self._frac_part = repeat_factors - self._int_part - - @staticmethod - def repeat_factors_from_category_frequency(dataset_dicts, repeat_thresh): - """ - Compute (fractional) per-image repeat factors based on category frequency. - The repeat factor for an image is a function of the frequency of the rarest - category labeled in that image. The "frequency of category c" in [0, 1] is defined - as the fraction of images in the training set (without repeats) in which category c - appears. - See :paper:`lvis` (>= v2) Appendix B.2. - - Args: - dataset_dicts (list[dict]): annotations in Detectron2 dataset format. - repeat_thresh (float): frequency threshold below which data is repeated. - If the frequency is half of `repeat_thresh`, the image will be - repeated twice. - - Returns: - torch.Tensor: - the i-th element is the repeat factor for the dataset image at index i. - """ - # 1. For each category c, compute the fraction of images that contain it: f(c) - category_freq = defaultdict(int) - for dataset_dict in dataset_dicts: # For each image (without repeats) - cat_ids = {ann["category_id"] for ann in dataset_dict["annotations"]} - for cat_id in cat_ids: - category_freq[cat_id] += 1 - num_images = len(dataset_dicts) - for k, v in category_freq.items(): - category_freq[k] = v / num_images - - # 2. 
For each category c, compute the category-level repeat factor: - # r(c) = max(1, sqrt(t / f(c))) - category_rep = { - cat_id: max(1.0, math.sqrt(repeat_thresh / cat_freq)) - for cat_id, cat_freq in category_freq.items() - } - - # 3. For each image I, compute the image-level repeat factor: - # r(I) = max_{c in I} r(c) - rep_factors = [] - for dataset_dict in dataset_dicts: - cat_ids = {ann["category_id"] for ann in dataset_dict["annotations"]} - rep_factor = max({category_rep[cat_id] for cat_id in cat_ids}, default=1.0) - rep_factors.append(rep_factor) - - return torch.tensor(rep_factors, dtype=torch.float32) - - def _get_epoch_indices(self, generator): - """ - Create a list of dataset indices (with repeats) to use for one epoch. - - Args: - generator (torch.Generator): pseudo random number generator used for - stochastic rounding. - - Returns: - torch.Tensor: list of dataset indices to use in one epoch. Each index - is repeated based on its calculated repeat factor. - """ - # Since repeat factors are fractional, we use stochastic rounding so - # that the target repeat factor is achieved in expectation over the - # course of training - rands = torch.rand(len(self._frac_part), generator=generator) - rep_factors = self._int_part + (rands < self._frac_part).float() - # Construct a list of indices in which we repeat images as specified - indices = [] - for dataset_index, rep_factor in enumerate(rep_factors): - indices.extend([dataset_index] * int(rep_factor.item())) - return torch.tensor(indices, dtype=torch.int64) - - def __iter__(self): - start = self._rank - yield from itertools.islice(self._infinite_indices(), start, None, self._world_size) - - def _infinite_indices(self): - g = torch.Generator() - g.manual_seed(self._seed) - while True: - # Sample indices with repeats determined by stochastic rounding; each - # "epoch" may have a slightly different size due to the rounding. - indices = self._get_epoch_indices(g) - if self._shuffle: - randperm = torch.randperm(len(indices), generator=g) - yield from indices[randperm].tolist() - else: - yield from indices.tolist() - - -class InferenceSampler(Sampler): - """ - Produce indices for inference across all workers. - Inference needs to run on the __exact__ set of samples, - therefore when the total number of samples is not divisible by the number of workers, - this sampler produces different number of samples on different workers. 
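
    On the RepeatFactorTrainingSampler above: its stochastic rounding turns a
    fractional repeat factor into an integer per epoch, e.g. a factor of 2.3
    repeats an image 2 times with probability 0.7 and 3 times with
    probability 0.3, so the expected count over training is exactly 2.3:

        rf = torch.tensor([2.3])
        int_part, frac_part = torch.trunc(rf), rf - torch.trunc(rf)
        repeats = int_part + (torch.rand(1) < frac_part).float()  # 2.0 or 3.0
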
- """ - - def __init__(self, size: int): - """ - Args: - size (int): the total number of data of the underlying dataset to sample from - """ - self._size = size - assert size > 0 - self._rank = comm.get_rank() - self._world_size = comm.get_world_size() - self._local_indices = self._get_local_indices(size, self._world_size, self._rank) - - @staticmethod - def _get_local_indices(total_size, world_size, rank): - shard_size = total_size // world_size - left = total_size % world_size - shard_sizes = [shard_size + int(r < left) for r in range(world_size)] - - begin = sum(shard_sizes[:rank]) - end = min(sum(shard_sizes[: rank + 1]), total_size) - return range(begin, end) - - def __iter__(self): - yield from self._local_indices - - def __len__(self): - return len(self._local_indices) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/regnet.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/regnet.py deleted file mode 100644 index 3533d63385d1324cfc1559eae9576b3fa52585af..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/regnet.py +++ /dev/null @@ -1,452 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Implementation of RegNet models from :paper:`dds` and :paper:`scaling`. - -This code is adapted from https://github.com/facebookresearch/pycls with minimal modifications. -Some code duplication exists between RegNet and ResNets (e.g., ResStem) in order to simplify -model loading. -""" - -import numpy as np -from torch import nn - -from detectron2.layers import CNNBlockBase, ShapeSpec, get_norm - -from .backbone import Backbone - -__all__ = [ - "AnyNet", - "RegNet", - "ResStem", - "SimpleStem", - "VanillaBlock", - "ResBasicBlock", - "ResBottleneckBlock", -] - - -def conv2d(w_in, w_out, k, *, stride=1, groups=1, bias=False): - """Helper for building a conv2d layer.""" - assert k % 2 == 1, "Only odd size kernels supported to avoid padding issues." - s, p, g, b = stride, (k - 1) // 2, groups, bias - return nn.Conv2d(w_in, w_out, k, stride=s, padding=p, groups=g, bias=b) - - -def gap2d(): - """Helper for building a global average pooling layer.""" - return nn.AdaptiveAvgPool2d((1, 1)) - - -def pool2d(k, *, stride=1): - """Helper for building a pool2d layer.""" - assert k % 2 == 1, "Only odd size kernels supported to avoid padding issues." 
- return nn.MaxPool2d(k, stride=stride, padding=(k - 1) // 2) - - -def init_weights(m): - """Performs ResNet-style weight initialization.""" - if isinstance(m, nn.Conv2d): - # Note that there is no bias due to BN - fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(mean=0.0, std=np.sqrt(2.0 / fan_out)) - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1.0) - m.bias.data.zero_() - elif isinstance(m, nn.Linear): - m.weight.data.normal_(mean=0.0, std=0.01) - m.bias.data.zero_() - - -class ResStem(CNNBlockBase): - """ResNet stem for ImageNet: 7x7, BN, AF, MaxPool.""" - - def __init__(self, w_in, w_out, norm, activation_class): - super().__init__(w_in, w_out, 4) - self.conv = conv2d(w_in, w_out, 7, stride=2) - self.bn = get_norm(norm, w_out) - self.af = activation_class() - self.pool = pool2d(3, stride=2) - - def forward(self, x): - for layer in self.children(): - x = layer(x) - return x - - -class SimpleStem(CNNBlockBase): - """Simple stem for ImageNet: 3x3, BN, AF.""" - - def __init__(self, w_in, w_out, norm, activation_class): - super().__init__(w_in, w_out, 2) - self.conv = conv2d(w_in, w_out, 3, stride=2) - self.bn = get_norm(norm, w_out) - self.af = activation_class() - - def forward(self, x): - for layer in self.children(): - x = layer(x) - return x - - -class SE(nn.Module): - """Squeeze-and-Excitation (SE) block: AvgPool, FC, Act, FC, Sigmoid.""" - - def __init__(self, w_in, w_se, activation_class): - super().__init__() - self.avg_pool = gap2d() - self.f_ex = nn.Sequential( - conv2d(w_in, w_se, 1, bias=True), - activation_class(), - conv2d(w_se, w_in, 1, bias=True), - nn.Sigmoid(), - ) - - def forward(self, x): - return x * self.f_ex(self.avg_pool(x)) - - -class VanillaBlock(CNNBlockBase): - """Vanilla block: [3x3 conv, BN, Relu] x2.""" - - def __init__(self, w_in, w_out, stride, norm, activation_class, _params): - super().__init__(w_in, w_out, stride) - self.a = conv2d(w_in, w_out, 3, stride=stride) - self.a_bn = get_norm(norm, w_out) - self.a_af = activation_class() - self.b = conv2d(w_out, w_out, 3) - self.b_bn = get_norm(norm, w_out) - self.b_af = activation_class() - - def forward(self, x): - for layer in self.children(): - x = layer(x) - return x - - -class BasicTransform(nn.Module): - """Basic transformation: [3x3 conv, BN, Relu] x2.""" - - def __init__(self, w_in, w_out, stride, norm, activation_class, _params): - super().__init__() - self.a = conv2d(w_in, w_out, 3, stride=stride) - self.a_bn = get_norm(norm, w_out) - self.a_af = activation_class() - self.b = conv2d(w_out, w_out, 3) - self.b_bn = get_norm(norm, w_out) - self.b_bn.final_bn = True - - def forward(self, x): - for layer in self.children(): - x = layer(x) - return x - - -class ResBasicBlock(CNNBlockBase): - """Residual basic block: x + f(x), f = basic transform.""" - - def __init__(self, w_in, w_out, stride, norm, activation_class, params): - super().__init__(w_in, w_out, stride) - self.proj, self.bn = None, None - if (w_in != w_out) or (stride != 1): - self.proj = conv2d(w_in, w_out, 1, stride=stride) - self.bn = get_norm(norm, w_out) - self.f = BasicTransform(w_in, w_out, stride, norm, activation_class, params) - self.af = activation_class() - - def forward(self, x): - x_p = self.bn(self.proj(x)) if self.proj else x - return self.af(x_p + self.f(x)) - - -class BottleneckTransform(nn.Module): - """Bottleneck transformation: 1x1, 3x3 [+SE], 1x1.""" - - def __init__(self, w_in, w_out, stride, norm, activation_class, params): - super().__init__() - w_b = int(round(w_out * 
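        # Width bookkeeping for this transform (hypothetical RegNetY-style
        # numbers): w_out=224, bot_mul=1.0, group_w=56 and se_r=0.25 give
        #
        #     w_b    = int(round(224 * 1.0))    # = 224, bottleneck width
        #     groups = 224 // 56                # = 4, group-conv groups
        #     w_se   = int(round(w_in * 0.25))  # SE reduction width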
params["bot_mul"])) - w_se = int(round(w_in * params["se_r"])) - groups = w_b // params["group_w"] - self.a = conv2d(w_in, w_b, 1) - self.a_bn = get_norm(norm, w_b) - self.a_af = activation_class() - self.b = conv2d(w_b, w_b, 3, stride=stride, groups=groups) - self.b_bn = get_norm(norm, w_b) - self.b_af = activation_class() - self.se = SE(w_b, w_se, activation_class) if w_se else None - self.c = conv2d(w_b, w_out, 1) - self.c_bn = get_norm(norm, w_out) - self.c_bn.final_bn = True - - def forward(self, x): - for layer in self.children(): - x = layer(x) - return x - - -class ResBottleneckBlock(CNNBlockBase): - """Residual bottleneck block: x + f(x), f = bottleneck transform.""" - - def __init__(self, w_in, w_out, stride, norm, activation_class, params): - super().__init__(w_in, w_out, stride) - self.proj, self.bn = None, None - if (w_in != w_out) or (stride != 1): - self.proj = conv2d(w_in, w_out, 1, stride=stride) - self.bn = get_norm(norm, w_out) - self.f = BottleneckTransform(w_in, w_out, stride, norm, activation_class, params) - self.af = activation_class() - - def forward(self, x): - x_p = self.bn(self.proj(x)) if self.proj else x - return self.af(x_p + self.f(x)) - - -class AnyStage(nn.Module): - """AnyNet stage (sequence of blocks w/ the same output shape).""" - - def __init__(self, w_in, w_out, stride, d, block_class, norm, activation_class, params): - super().__init__() - for i in range(d): - block = block_class(w_in, w_out, stride, norm, activation_class, params) - self.add_module("b{}".format(i + 1), block) - stride, w_in = 1, w_out - - def forward(self, x): - for block in self.children(): - x = block(x) - return x - - -class AnyNet(Backbone): - """AnyNet model. See :paper:`dds`.""" - - def __init__( - self, - *, - stem_class, - stem_width, - block_class, - depths, - widths, - group_widths, - strides, - bottleneck_ratios, - se_ratio, - activation_class, - freeze_at=0, - norm="BN", - out_features=None, - ): - """ - Args: - stem_class (callable): A callable taking 4 arguments (channels in, channels out, - normalization, callable returning an activation function) that returns another - callable implementing the stem module. - stem_width (int): The number of output channels that the stem produces. - block_class (callable): A callable taking 6 arguments (channels in, channels out, - stride, normalization, callable returning an activation function, a dict of - block-specific parameters) that returns another callable implementing the repeated - block module. - depths (list[int]): Number of blocks in each stage. - widths (list[int]): For each stage, the number of output channels of each block. - group_widths (list[int]): For each stage, the number of channels per group in group - convolution, if the block uses group convolution. - strides (list[int]): The stride that each network stage applies to its input. - bottleneck_ratios (list[float]): For each stage, the ratio of the number of bottleneck - channels to the number of block input channels (or, equivalently, output channels), - if the block uses a bottleneck. - se_ratio (float): The ratio of the number of channels used inside the squeeze-excitation - (SE) module to it number of input channels, if SE the block uses SE. - activation_class (callable): A callable taking no arguments that returns another - callable implementing an activation function. - freeze_at (int): The number of stages at the beginning to freeze. - see :meth:`freeze` for detailed explanation. - norm (str or callable): normalization for all conv layers. 
- See :func:`layers.get_norm` for supported format. - out_features (list[str]): name of the layers whose outputs should - be returned in forward. RegNet's use "stem" and "s1", "s2", etc for the stages after - the stem. If None, will return the output of the last layer. - """ - super().__init__() - self.stem = stem_class(3, stem_width, norm, activation_class) - - current_stride = self.stem.stride - self._out_feature_strides = {"stem": current_stride} - self._out_feature_channels = {"stem": self.stem.out_channels} - self.stages_and_names = [] - prev_w = stem_width - - for i, (d, w, s, b, g) in enumerate( - zip(depths, widths, strides, bottleneck_ratios, group_widths) - ): - params = {"bot_mul": b, "group_w": g, "se_r": se_ratio} - stage = AnyStage(prev_w, w, s, d, block_class, norm, activation_class, params) - name = "s{}".format(i + 1) - self.add_module(name, stage) - self.stages_and_names.append((stage, name)) - self._out_feature_strides[name] = current_stride = int( - current_stride * np.prod([k.stride for k in stage.children()]) - ) - self._out_feature_channels[name] = list(stage.children())[-1].out_channels - prev_w = w - - self.apply(init_weights) - - if out_features is None: - out_features = [name] - self._out_features = out_features - assert len(self._out_features) - children = [x[0] for x in self.named_children()] - for out_feature in self._out_features: - assert out_feature in children, "Available children: {} does not include {}".format( - ", ".join(children), out_feature - ) - self.freeze(freeze_at) - - def forward(self, x): - """ - Args: - x: Tensor of shape (N,C,H,W). H, W must be a multiple of ``self.size_divisibility``. - - Returns: - dict[str->Tensor]: names and the corresponding features - """ - assert x.dim() == 4, f"Model takes an input of shape (N, C, H, W). Got {x.shape} instead!" - outputs = {} - x = self.stem(x) - if "stem" in self._out_features: - outputs["stem"] = x - for stage, name in self.stages_and_names: - x = stage(x) - if name in self._out_features: - outputs[name] = x - return outputs - - def output_shape(self): - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self._out_features - } - - def freeze(self, freeze_at=0): - """ - Freeze the first several stages of the model. Commonly used in fine-tuning. - - Layers that produce the same feature map spatial size are defined as one - "stage" by :paper:`FPN`. - - Args: - freeze_at (int): number of stages to freeze. - `1` means freezing the stem. `2` means freezing the stem and - one residual stage, etc. 
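-
-                For example, ``freeze_at=2`` freezes the stem and the first
-                stage ``s1`` while leaving ``s2`` onward trainable.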
- - Returns: - nn.Module: this model itself - """ - if freeze_at >= 1: - self.stem.freeze() - for idx, (stage, _) in enumerate(self.stages_and_names, start=2): - if freeze_at >= idx: - for block in stage.children(): - block.freeze() - return self - - -def adjust_block_compatibility(ws, bs, gs): - """Adjusts the compatibility of widths, bottlenecks, and groups.""" - assert len(ws) == len(bs) == len(gs) - assert all(w > 0 and b > 0 and g > 0 for w, b, g in zip(ws, bs, gs)) - vs = [int(max(1, w * b)) for w, b in zip(ws, bs)] - gs = [int(min(g, v)) for g, v in zip(gs, vs)] - ms = [np.lcm(g, b) if b > 1 else g for g, b in zip(gs, bs)] - vs = [max(m, int(round(v / m) * m)) for v, m in zip(vs, ms)] - ws = [int(v / b) for v, b in zip(vs, bs)] - assert all(w * b % g == 0 for w, b, g in zip(ws, bs, gs)) - return ws, bs, gs - - -def generate_regnet_parameters(w_a, w_0, w_m, d, q=8): - """Generates per stage widths and depths from RegNet parameters.""" - assert w_a >= 0 and w_0 > 0 and w_m > 1 and w_0 % q == 0 - # Generate continuous per-block ws - ws_cont = np.arange(d) * w_a + w_0 - # Generate quantized per-block ws - ks = np.round(np.log(ws_cont / w_0) / np.log(w_m)) - ws_all = w_0 * np.power(w_m, ks) - ws_all = np.round(np.divide(ws_all, q)).astype(int) * q - # Generate per stage ws and ds (assumes ws_all are sorted) - ws, ds = np.unique(ws_all, return_counts=True) - # Compute number of actual stages and total possible stages - num_stages, total_stages = len(ws), ks.max() + 1 - # Convert numpy arrays to lists and return - ws, ds, ws_all, ws_cont = (x.tolist() for x in (ws, ds, ws_all, ws_cont)) - return ws, ds, num_stages, total_stages, ws_all, ws_cont - - -class RegNet(AnyNet): - """RegNet model. See :paper:`dds`.""" - - def __init__( - self, - *, - stem_class, - stem_width, - block_class, - depth, - w_a, - w_0, - w_m, - group_width, - stride=2, - bottleneck_ratio=1.0, - se_ratio=0.0, - activation_class=None, - freeze_at=0, - norm="BN", - out_features=None, - ): - """ - Build a RegNet from the parameterization described in :paper:`dds` Section 3.3. - - Args: - See :class:`AnyNet` for arguments that are not listed here. - depth (int): Total number of blocks in the RegNet. - w_a (float): Factor by which block width would increase prior to quantizing block widths - by stage. See :paper:`dds` Section 3.3. - w_0 (int): Initial block width. See :paper:`dds` Section 3.3. - w_m (float): Parameter controlling block width quantization. - See :paper:`dds` Section 3.3. - group_width (int): Number of channels per group in group convolution, if the block uses - group convolution. - bottleneck_ratio (float): The ratio of the number of bottleneck channels to the number - of block input channels (or, equivalently, output channels), if the block uses a - bottleneck. - stride (int): The stride that each network stage applies to its input. 
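-
-        A hypothetical instantiation sketch (the parameter values below are
-        illustrative, roughly in the style of a small RegNetX config, and are
-        not taken from this file)::
-
-            model = RegNet(
-                stem_class=SimpleStem,
-                stem_width=32,
-                block_class=ResBottleneckBlock,
-                depth=22,
-                w_a=24.48,
-                w_0=24,
-                w_m=2.54,
-                group_width=16,
-                out_features=["s1", "s2", "s3", "s4"],
-            )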
- """ - ws, ds = generate_regnet_parameters(w_a, w_0, w_m, depth)[0:2] - ss = [stride for _ in ws] - bs = [bottleneck_ratio for _ in ws] - gs = [group_width for _ in ws] - ws, bs, gs = adjust_block_compatibility(ws, bs, gs) - - def default_activation_class(): - return nn.ReLU(inplace=True) - - super().__init__( - stem_class=stem_class, - stem_width=stem_width, - block_class=block_class, - depths=ds, - widths=ws, - strides=ss, - group_widths=gs, - bottleneck_ratios=bs, - se_ratio=se_ratio, - activation_class=default_activation_class - if activation_class is None - else activation_class, - freeze_at=freeze_at, - norm=norm, - out_features=out_features, - ) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/modules/attention.py b/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/modules/attention.py deleted file mode 100644 index f4eff39ccb6d75daa764f6eb70a7cef024fb5a3f..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/modules/attention.py +++ /dev/null @@ -1,261 +0,0 @@ -from inspect import isfunction -import math -import torch -import torch.nn.functional as F -from torch import nn, einsum -from einops import rearrange, repeat - -from ldm.modules.diffusionmodules.util import checkpoint - - -def exists(val): - return val is not None - - -def uniq(arr): - return{el: True for el in arr}.keys() - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def init_(tensor): - dim = tensor.shape[-1] - std = 1 / math.sqrt(dim) - tensor.uniform_(-std, std) - return tensor - - -# feedforward -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - project_in = nn.Sequential( - nn.Linear(dim, inner_dim), - nn.GELU() - ) if not glu else GEGLU(dim, inner_dim) - - self.net = nn.Sequential( - project_in, - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - def forward(self, x): - return self.net(x) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. 
- """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def Normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -class LinearAttention(nn.Module): - def __init__(self, dim, heads=4, dim_head=32): - super().__init__() - self.heads = heads - hidden_dim = dim_head * heads - self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias = False) - self.to_out = nn.Conv2d(hidden_dim, dim, 1) - - def forward(self, x): - b, c, h, w = x.shape - qkv = self.to_qkv(x) - q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads = self.heads, qkv=3) - k = k.softmax(dim=-1) - context = torch.einsum('bhdn,bhen->bhde', k, v) - out = torch.einsum('bhde,bhdn->bhen', context, q) - out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w) - return self.to_out(out) - - -class SpatialSelfAttention(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = rearrange(q, 'b c h w -> b (h w) c') - k = rearrange(k, 'b c h w -> b c (h w)') - w_ = torch.einsum('bij,bjk->bik', q, k) - - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = rearrange(v, 'b c h w -> b c (h w)') - w_ = rearrange(w_, 'b i j -> b j i') - h_ = torch.einsum('bij,bjk->bik', v, w_) - h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h) - h_ = self.proj_out(h_) - - return x+h_ - - -class CrossAttention(nn.Module): - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.): - super().__init__() - inner_dim = dim_head * heads - context_dim = default(context_dim, query_dim) - - self.scale = dim_head ** -0.5 - self.heads = heads - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, query_dim), - nn.Dropout(dropout) - ) - - def forward(self, x, context=None, mask=None): - h = self.heads - - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) - - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - - if exists(mask): - mask = rearrange(mask, 'b ... 
-> b (...)') - max_neg_value = -torch.finfo(sim.dtype).max - mask = repeat(mask, 'b j -> (b h) () j', h=h) - sim.masked_fill_(~mask, max_neg_value) - - # attention, what we cannot get enough of - attn = sim.softmax(dim=-1) - - out = einsum('b i j, b j d -> b i d', attn, v) - out = rearrange(out, '(b h) n d -> b n (h d)', h=h) - return self.to_out(out) - - -class BasicTransformerBlock(nn.Module): - def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True): - super().__init__() - self.attn1 = CrossAttention(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout) # is a self-attention - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = CrossAttention(query_dim=dim, context_dim=context_dim, - heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def forward(self, x, context=None): - return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint) - - def _forward(self, x, context=None): - x = self.attn1(self.norm1(x)) + x - x = self.attn2(self.norm2(x), context=context) + x - x = self.ff(self.norm3(x)) + x - return x - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. - First, project the input (aka embedding) - and reshape to b, t, d. - Then apply standard transformer action. - Finally, reshape to image - """ - def __init__(self, in_channels, n_heads, d_head, - depth=1, dropout=0., context_dim=None): - super().__init__() - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = Normalize(in_channels) - - self.proj_in = nn.Conv2d(in_channels, - inner_dim, - kernel_size=1, - stride=1, - padding=0) - - self.transformer_blocks = nn.ModuleList( - [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim) - for d in range(depth)] - ) - - self.proj_out = zero_module(nn.Conv2d(inner_dim, - in_channels, - kernel_size=1, - stride=1, - padding=0)) - - def forward(self, x, context=None): - # note: if no context is given, cross-attention defaults to self-attention - b, c, h, w = x.shape - x_in = x - x = self.norm(x) - x = self.proj_in(x) - x = rearrange(x, 'b c h w -> b (h w) c') - for block in self.transformer_blocks: - x = block(x, context=context) - x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w) - x = self.proj_out(x) - return x + x_in \ No newline at end of file diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/utils/sync_bn.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/utils/sync_bn.py deleted file mode 100644 index f78f39181d75bb85c53e8c7c8eaf45690e9f0bee..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/utils/sync_bn.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch - -import annotator.uniformer.mmcv as mmcv - - -class _BatchNormXd(torch.nn.modules.batchnorm._BatchNorm): - """A general BatchNorm layer without input dimension check. - - Reproduced from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc - is `_check_input_dim` that is designed for tensor sanity checks. - The check has been bypassed in this class for the convenience of converting - SyncBatchNorm. 
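-
-    A usage sketch (``model`` stands for any module that may contain SyncBN
-    layers)::
-
-        model = revert_sync_batchnorm(model)  # SyncBN -> _BatchNormXd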
- """ - - def _check_input_dim(self, input): - return - - -def revert_sync_batchnorm(module): - """Helper function to convert all `SyncBatchNorm` (SyncBN) and - `mmcv.ops.sync_bn.SyncBatchNorm`(MMSyncBN) layers in the model to - `BatchNormXd` layers. - - Adapted from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - - Args: - module (nn.Module): The module containing `SyncBatchNorm` layers. - - Returns: - module_output: The converted module with `BatchNormXd` layers. - """ - module_output = module - module_checklist = [torch.nn.modules.batchnorm.SyncBatchNorm] - if hasattr(mmcv, 'ops'): - module_checklist.append(mmcv.ops.SyncBatchNorm) - if isinstance(module, tuple(module_checklist)): - module_output = _BatchNormXd(module.num_features, module.eps, - module.momentum, module.affine, - module.track_running_stats) - if module.affine: - # no_grad() may not be needed here but - # just to be consistent with `convert_sync_batchnorm()` - with torch.no_grad(): - module_output.weight = module.weight - module_output.bias = module.bias - module_output.running_mean = module.running_mean - module_output.running_var = module.running_var - module_output.num_batches_tracked = module.num_batches_tracked - module_output.training = module.training - # qconfig exists in quantized models - if hasattr(module, 'qconfig'): - module_output.qconfig = module.qconfig - for name, child in module.named_children(): - module_output.add_module(name, revert_sync_batchnorm(child)) - del module - return module_output diff --git a/spaces/PantOfLuck/my_stable_diffusion_webui/style.css b/spaces/PantOfLuck/my_stable_diffusion_webui/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/PantOfLuck/my_stable_diffusion_webui/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/ecmascript/parse.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/ecmascript/parse.go deleted file mode 100644 index 2315876e09aabe73c45c220240d7cc0586e0b777..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/ecmascript/parse.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/Gryphe-MythoMax-L2-13b/app.py b/spaces/PeepDaSlan9/Gryphe-MythoMax-L2-13b/app.py deleted file mode 100644 index ebcbf882e3770b8e61fa5e411b6c232190e8f657..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/Gryphe-MythoMax-L2-13b/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Gryphe/MythoMax-L2-13b").launch() \ No newline at end of file diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/datasets/ade20k.py b/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/datasets/ade20k.py deleted file mode 100644 index efc8b4bb20c981f3db6df7eb52b3dc0744c94cc0..0000000000000000000000000000000000000000 --- 
a/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/datasets/ade20k.py +++ /dev/null @@ -1,54 +0,0 @@ -# dataset settings -dataset_type = 'ADE20KDataset' -data_root = 'data/ade/ADEChallengeData2016' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -crop_size = (512, 512) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', reduce_zero_label=True), - dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2048, 512), - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/training', - ann_dir='annotations/training', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline)) diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/ball_query.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/ball_query.py deleted file mode 100644 index d0466847c6e5c1239e359a0397568413ebc1504a..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/ball_query.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['ball_query_forward']) - - -class BallQuery(Function): - """Find nearby points in spherical space.""" - - @staticmethod - def forward(ctx, min_radius: float, max_radius: float, sample_num: int, - xyz: torch.Tensor, center_xyz: torch.Tensor) -> torch.Tensor: - """ - Args: - min_radius (float): minimum radius of the balls. - max_radius (float): maximum radius of the balls. - sample_num (int): maximum number of features in the balls. - xyz (Tensor): (B, N, 3) xyz coordinates of the features. - center_xyz (Tensor): (B, npoint, 3) centers of the ball query. - - Returns: - Tensor: (B, npoint, nsample) tensor with the indices of - the features that form the query balls. 
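-
-        A shape-level sketch (tensor values are illustrative; the op runs on
-        the compiled CUDA extension)::
-
-            xyz = torch.rand(2, 1024, 3).cuda()      # B=2, N=1024 points
-            centers = torch.rand(2, 128, 3).cuda()   # npoint=128 ball centers
-            idx = ball_query(0.0, 0.2, 16, xyz, centers)  # (2, 128, 16), int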
- """ - assert center_xyz.is_contiguous() - assert xyz.is_contiguous() - assert min_radius < max_radius - - B, N, _ = xyz.size() - npoint = center_xyz.size(1) - idx = xyz.new_zeros(B, npoint, sample_num, dtype=torch.int) - - ext_module.ball_query_forward( - center_xyz, - xyz, - idx, - b=B, - n=N, - m=npoint, - min_radius=min_radius, - max_radius=max_radius, - nsample=sample_num) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(idx) - return idx - - @staticmethod - def backward(ctx, a=None): - return None, None, None, None - - -ball_query = BallQuery.apply diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py deleted file mode 100644 index a0986143fa4f2bd36f5271354fe5f843f35b9e6f..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py +++ /dev/null @@ -1,51 +0,0 @@ -from annotator.uniformer.mmcv.cnn import DepthwiseSeparableConvModule - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class DepthwiseSeparableFCNHead(FCNHead): - """Depthwise-Separable Fully Convolutional Network for Semantic - Segmentation. - - This head is implemented according to Fast-SCNN paper. - Args: - in_channels(int): Number of output channels of FFM. - channels(int): Number of middle-stage channels in the decode head. - concat_input(bool): Whether to concatenate original decode input into - the result of several consecutive convolution layers. - Default: True. - num_classes(int): Used to determine the dimension of - final prediction tensor. - in_index(int): Correspond with 'out_indices' in FastSCNN backbone. - norm_cfg (dict | None): Config of norm layers. - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - loss_decode(dict): Config of loss type and some - relevant additional options. 
- """ - - def __init__(self, **kwargs): - super(DepthwiseSeparableFCNHead, self).__init__(**kwargs) - self.convs[0] = DepthwiseSeparableConvModule( - self.in_channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) - for i in range(1, self.num_convs): - self.convs[i] = DepthwiseSeparableConvModule( - self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) - - if self.concat_input: - self.conv_cat = DepthwiseSeparableConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) diff --git a/spaces/Purple11/Grounded-Diffusion/ldm/models/seg_module.py b/spaces/Purple11/Grounded-Diffusion/ldm/models/seg_module.py deleted file mode 100644 index 776a0f3a5d4d3d1176b50f0098b0e5ef9e1f7d50..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/ldm/models/seg_module.py +++ /dev/null @@ -1,471 +0,0 @@ - -from functools import partial -import math -from typing import Iterable -from black import diff -from torch import nn, einsum -import numpy as np -import torch as th -import torch.nn as nn -import functools -import torch.nn.functional as F - -import math -import torch -import torch.nn.functional as F -from torch import nn, Tensor -from einops import rearrange -import copy -from torchvision import transforms -from torchvision.transforms import InterpolationMode -class MLP(nn.Module): - """Very simple multi-layer perceptron (also called FFN)""" - - def __init__(self, input_dim, hidden_dim, output_dim, num_layers): - super().__init__() - self.num_layers = num_layers - h = [hidden_dim] * (num_layers - 1) - self.layers = nn.ModuleList( - nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]) - ) - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - return x -def resize_fn(img, size): - return transforms.Resize(size, InterpolationMode.BICUBIC)( - transforms.ToPILImage()(img)) -import math -def _get_clones(module, N): - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - - -def _get_activation_fn(activation): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - raise RuntimeError(F"activation should be relu/gelu, not {activation}.") - -class TransformerDecoder(nn.Module): - - def __init__(self, decoder_layer, num_layers): - super().__init__() - self.layers = _get_clones(decoder_layer, num_layers) - self.num_layers = num_layers - - def forward(self, tgt, memory, pos = None, query_pos = None): - output = tgt - - for layer in self.layers: - output = layer(output, memory, pos=pos, query_pos=query_pos) - - return output - - -class TransformerDecoderLayer(nn.Module): - - def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1, no_norm = False, - activation="relu"): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout, bias=False) - self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout, bias=False) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) if not no_norm else nn.Identity() - self.norm2 = 
nn.LayerNorm(d_model) if not no_norm else nn.Identity() - self.norm3 = nn.LayerNorm(d_model) if not no_norm else nn.Identity() - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - self.dropout3 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - - def with_pos_embed(self, tensor, pos): - return tensor if pos is None else tensor + pos - - def forward(self, tgt, memory, pos = None, query_pos = None): - tgt2 = self.norm1(tgt) - q = k = self.with_pos_embed(tgt2, query_pos) - # print('q:',q.shape) - tgt2 = self.self_attn(q, k, value=tgt2)[0] - - tgt = tgt + self.dropout1(tgt2) - tgt2 = self.norm2(tgt) - tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt2, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory)[0] - tgt = tgt + self.dropout2(tgt2) - tgt2 = self.norm3(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2)))) - tgt = tgt + self.dropout3(tgt2) - return tgt -# Projection of x onto y -def proj(x, y): - return torch.mm(y, x.t()) * y / torch.mm(y, y.t()) - -# Orthogonalize x wrt list of vectors ys -def gram_schmidt(x, ys): - for y in ys: - x = x - proj(x, y) - return x -def power_iteration(W, u_, update=True, eps=1e-12): - # Lists holding singular vectors and values - us, vs, svs = [], [], [] - for i, u in enumerate(u_): - # Run one step of the power iteration - with torch.no_grad(): - v = torch.matmul(u, W) - # Run Gram-Schmidt to subtract components of all other singular vectors - v = F.normalize(gram_schmidt(v, vs), eps=eps) - # Add to the list - vs += [v] - # Update the other singular vector - u = torch.matmul(v, W.t()) - # Run Gram-Schmidt to subtract components of all other singular vectors - u = F.normalize(gram_schmidt(u, us), eps=eps) - # Add to the list - us += [u] - if update: - u_[i][:] = u - # Compute this singular value and add it to the list - svs += [torch.squeeze(torch.matmul(torch.matmul(v, W.t()), u.t()))] - #svs += [torch.sum(F.linear(u, W.transpose(0, 1)) * v)] - return svs, us, vs - -# Spectral normalization base class -class SN(object): - def __init__(self, num_svs, num_itrs, num_outputs, transpose=False, eps=1e-12): - # Number of power iterations per step - self.num_itrs = num_itrs - # Number of singular values - self.num_svs = num_svs - # Transposed? - self.transpose = transpose - # Epsilon value for avoiding divide-by-0 - self.eps = eps - # Register a singular vector for each sv - for i in range(self.num_svs): - self.register_buffer('u%d' % i, torch.randn(1, num_outputs)) - self.register_buffer('sv%d' % i, torch.ones(1)) - - # Singular vectors (u side) - @property - def u(self): - return [getattr(self, 'u%d' % i) for i in range(self.num_svs)] - - # Singular values; - # note that these buffers are just for logging and are not used in training. - @property - def sv(self): - return [getattr(self, 'sv%d' % i) for i in range(self.num_svs)] - - # Compute the spectrally-normalized weight - def W_(self): - W_mat = self.weight.view(self.weight.size(0), -1) - if self.transpose: - W_mat = W_mat.t() - # Apply num_itrs power iterations - for _ in range(self.num_itrs): - svs, us, vs = power_iteration(W_mat, self.u, update=self.training, eps=self.eps) - # Update the svs - if self.training: - with torch.no_grad(): # Make sure to do this in a no_grad() context or you'll get memory leaks! 
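-                # svs[0] estimates the largest singular value of W, so the
-                # weight returned below, W / svs[0], has spectral norm ~1 (the
-                # standard SN-GAN normalization); the sv buffers are log-only.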
- for i, sv in enumerate(svs): - self.sv[i][:] = sv - return self.weight / svs[0] - -# Linear layer with spectral norm -class SNLinear(nn.Linear, SN): - def __init__(self, in_features, out_features, bias=True, - num_svs=1, num_itrs=1, eps=1e-12): - nn.Linear.__init__(self, in_features, out_features, bias) - SN.__init__(self, num_svs, num_itrs, out_features, eps=eps) - def forward(self, x): - return F.linear(x, self.W_(), self.bias) - -# 2D Conv layer with spectral norm -class SNConv2d(nn.Conv2d, SN): - def __init__(self, in_channels, out_channels, kernel_size, stride=1, - padding=0, dilation=1, groups=1, bias=True, - num_svs=1, num_itrs=1, eps=1e-12): - nn.Conv2d.__init__(self, in_channels, out_channels, kernel_size, stride, - padding, dilation, groups, bias) - SN.__init__(self, num_svs, num_itrs, out_channels, eps=eps) - def forward(self, x): - return F.conv2d(x, self.W_(), self.bias, self.stride, - self.padding, self.dilation, self.groups) - -class SegBlock(nn.Module): - def __init__(self, in_channels, out_channels, con_channels, - which_conv=nn.Conv2d, which_linear=None, activation=None, - upsample=None): - super(SegBlock, self).__init__() - - self.in_channels, self.out_channels = in_channels, out_channels - self.which_conv, self.which_linear = which_conv, which_linear - self.activation = activation - self.upsample = upsample - - self.conv1 = self.which_conv(self.in_channels, self.out_channels) - self.conv2 = self.which_conv(self.out_channels, self.out_channels) - self.learnable_sc = in_channels != out_channels or upsample - if self.learnable_sc: - self.conv_sc = self.which_conv(in_channels, out_channels, - kernel_size=1, padding=0) - - self.register_buffer('stored_mean1', torch.zeros(in_channels)) - self.register_buffer('stored_var1', torch.ones(in_channels)) - self.register_buffer('stored_mean2', torch.zeros(out_channels)) - self.register_buffer('stored_var2', torch.ones(out_channels)) - - self.upsample = upsample - - def forward(self, x, y=None): - x = F.batch_norm(x, self.stored_mean1, self.stored_var1, None, None, - self.training, 0.1, 1e-4) - h = self.activation(x) - if self.upsample: - h = self.upsample(h) - x = self.upsample(x) - h = self.conv1(h) - h = F.batch_norm(h, self.stored_mean2, self.stored_var2, None, None, - self.training, 0.1, 1e-4) - - h = self.activation(h) - h = self.conv2(h) - if self.learnable_sc: - x = self.conv_sc(x) - return h + x - -def make_coord(shape, ranges=None, flatten=True): - """ Make coordinates at grid centers. 
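-
-    For example, ``make_coord((2, 2))`` returns the four cell centers
-    (-0.5, -0.5), (-0.5, 0.5), (0.5, -0.5), (0.5, 0.5) in the default
-    [-1, 1] range along each axis.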
- """ - coord_seqs = [] - for i, n in enumerate(shape): - - if ranges is None: - v0, v1 = -1, 1 - else: - v0, v1 = ranges[i] - r = (v1 - v0) / (2 * n) - seq = v0 + r + (2 * r) * torch.arange(n).float() - coord_seqs.append(seq) - ret = torch.stack(torch.meshgrid(*coord_seqs), dim=-1) - if flatten: - ret = ret.view(-1, ret.shape[-1]) - return ret - -class Embedder: - def __init__(self, **kwargs): - self.kwargs = kwargs - self.create_embedding_fn() - - def create_embedding_fn(self): - embed_fns = [] - d = self.kwargs['input_dims'] - out_dim = 0 - if self.kwargs['include_input']: - embed_fns.append(lambda x : x) - out_dim += d - - max_freq = self.kwargs['max_freq_log2'] - N_freqs = self.kwargs['num_freqs'] - - if self.kwargs['log_sampling']: - - freq_bands = 2.**torch.linspace(0., max_freq, steps=N_freqs).double() - else: - freq_bands = torch.linspace(2.**0., 2.**max_freq, steps=N_freqs) - - for freq in freq_bands: - for p_fn in self.kwargs['periodic_fns']: - - embed_fns.append(lambda x, p_fn=p_fn, freq=freq : p_fn(x.double() * freq)) - out_dim += d - - self.embed_fns = embed_fns - self.out_dim = out_dim - - def embed(self, inputs): - return torch.cat([fn(inputs) for fn in self.embed_fns], -1) - -def get_embedder(multires, i=0): - - if i == -1: - return nn.Identity(), 3 - - embed_kwargs = { - 'include_input' : False, - 'input_dims' : 2, - 'max_freq_log2' : multires-1, - 'num_freqs' : multires, - 'log_sampling' : True, - 'periodic_fns' : [torch.sin, torch.cos], - } - - embedder_obj = Embedder(**embed_kwargs) - embed = lambda x, eo=embedder_obj : eo.embed(x) - return embed, embedder_obj.out_dim - -class Segmodule(nn.Module): - - def __init__(self, - embedding_dim=512, - num_heads=8, - num_layers=3, - hidden_dim=2048, - dropout_rate=0): - super().__init__() - - low_feature_channel = 16 - mid_feature_channel = 32 - high_feature_channel = 64 - highest_feature_channel=128 - - self.low_feature_conv = nn.Sequential( - nn.Conv2d(1280*6*2, low_feature_channel, kernel_size=1, bias=False), - - ) - self.mid_feature_conv = nn.Sequential( - nn.Conv2d((1280*5+640)*2, mid_feature_channel, kernel_size=1, bias=False), - - ) - self.mid_feature_mix_conv = SegBlock( - in_channels=low_feature_channel+mid_feature_channel, - out_channels=low_feature_channel+mid_feature_channel, - con_channels=128, - which_conv=functools.partial(SNConv2d, - kernel_size=3, padding=1, - num_svs=1, num_itrs=1, - eps=1e-04), - which_linear=functools.partial(SNLinear, - num_svs=1, num_itrs=1, - eps=1e-04), - activation=nn.ReLU(inplace=True), - upsample=False, - ) - self.high_feature_conv = nn.Sequential( - nn.Conv2d((1280+640*4+320)*2, high_feature_channel, kernel_size=1, bias=False), - ) - self.high_feature_mix_conv = SegBlock( - in_channels=low_feature_channel+mid_feature_channel+high_feature_channel, - out_channels=low_feature_channel+mid_feature_channel+high_feature_channel, - con_channels=128, - which_conv=functools.partial(SNConv2d, - kernel_size=3, padding=1, - num_svs=1, num_itrs=1, - eps=1e-04), - which_linear=functools.partial(SNLinear, - num_svs=1, num_itrs=1, - eps=1e-04), - activation=nn.ReLU(inplace=True), - upsample=False, - ) - self.highest_feature_conv = nn.Sequential( - nn.Conv2d((640+320*6)*2, highest_feature_channel, kernel_size=1, bias=False), - ) - self.highest_feature_mix_conv = SegBlock( - in_channels=low_feature_channel+mid_feature_channel+high_feature_channel+highest_feature_channel, - out_channels=low_feature_channel+mid_feature_channel+high_feature_channel+highest_feature_channel, - con_channels=128, - 
which_conv=functools.partial(SNConv2d, - kernel_size=3, padding=1, - num_svs=1, num_itrs=1, - eps=1e-04), - which_linear=functools.partial(SNLinear, - num_svs=1, num_itrs=1, - eps=1e-04), - activation=nn.ReLU(inplace=True), - upsample=False, - ) - - feature_dim=low_feature_channel+mid_feature_channel+high_feature_channel+highest_feature_channel - query_dim=feature_dim*16 - decoder_layer = TransformerDecoderLayer(embedding_dim, num_heads, hidden_dim, dropout_rate) - self.transfromer_decoder = TransformerDecoder(decoder_layer, num_layers) - self.mlp = MLP(embedding_dim, embedding_dim, feature_dim, 3) - context_dim=768 - - self.to_k = nn.Linear(query_dim, embedding_dim, bias=False) - self.to_q = nn.Linear(context_dim, embedding_dim, bias=False) - - def forward(self,diffusion_feature,text_embedding): - - image_feature=self._prepare_features(diffusion_feature) - - final_image_feature=F.interpolate(image_feature, size=512, mode='bilinear', align_corners=False) - b=final_image_feature.size()[0] - - patch_size = 4 - patch_number=int(image_feature.size()[2]/patch_size) - - image_feature = torch.nn.functional.unfold(image_feature, patch_size, stride=patch_size).transpose(1,2).contiguous() - - image_feature=rearrange(image_feature, 'b n d -> (b n) d ') - text_embedding=rearrange(text_embedding, 'b n d -> (b n) d ') - - q = self.to_q(text_embedding) - k = self.to_k(image_feature) - - output_query = self.transfromer_decoder(q, k, None) - - output_query=rearrange(output_query, '(b n) d -> b n d',b=b) - - mask_embedding=self.mlp(output_query) - seg_result=einsum('b d h w, b n d -> b n h w', final_image_feature, mask_embedding) - - return seg_result - def _prepare_features(self, features, upsample='bilinear'): - self.low_feature_size = 16 - self.mid_feature_size = 32 - self.high_feature_size = 64 - - low_features = [ - F.interpolate(i, size=self.low_feature_size, mode=upsample, align_corners=False) for i in features["low"] - ] - low_features = torch.cat(low_features, dim=1) - - mid_features = [ - F.interpolate(i, size=self.mid_feature_size, mode=upsample, align_corners=False) for i in features["mid"] - ] - mid_features = torch.cat(mid_features, dim=1) - - high_features = [ - F.interpolate(i, size=self.high_feature_size, mode=upsample, align_corners=False) for i in features["high"] - ] - high_features = torch.cat(high_features, dim=1) - highest_features=torch.cat(features["highest"],dim=1) - features_dict = { - 'low': low_features, - 'mid': mid_features, - 'high': high_features, - 'highest':highest_features, - } - - - low_feat = self.low_feature_conv(features_dict['low']) - low_feat = F.interpolate(low_feat, size=self.mid_feature_size, mode='bilinear', align_corners=False) - - mid_feat = self.mid_feature_conv(features_dict['mid']) - mid_feat = torch.cat([low_feat, mid_feat], dim=1) - mid_feat = self.mid_feature_mix_conv(mid_feat, y=None) - mid_feat = F.interpolate(mid_feat, size=self.high_feature_size, mode='bilinear', align_corners=False) - - high_feat = self.high_feature_conv(features_dict['high']) - high_feat = torch.cat([mid_feat, high_feat], dim=1) - high_feat = self.high_feature_mix_conv(high_feat, y=None) - - highest_feat=self.highest_feature_conv(features_dict['highest']) - highest_feat=torch.cat([high_feat,highest_feat],dim=1) - highest_feat=self.highest_feature_mix_conv(highest_feat,y=None) - - return highest_feat - diff --git "a/spaces/Qiukai/gpt/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py" 
"b/spaces/Qiukai/gpt/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py" deleted file mode 100644 index b910f5dc7c618ed2c7d847e7870d676f1e2b9a03..0000000000000000000000000000000000000000 --- "a/spaces/Qiukai/gpt/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py" +++ /dev/null @@ -1,67 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - - -def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, glob, os - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8') as f: - file_content = f.read() - - prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else "" - i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, llm_kwargs, chatbot, history=[], sys_prompt=system_prompt) # 带超时倒计时 - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say, llm_kwargs, chatbot, history=history, sys_prompt=system_prompt) # 带超时倒计时 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - - -@CatchException -def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/RMXK/RVC_HFF/infer/modules/ipex/__init__.py.py 
b/spaces/RMXK/RVC_HFF/infer/modules/ipex/__init__.py.py deleted file mode 100644 index 9f53b2d3f7025b2d71369dababa4e6f2a4affc48..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/infer/modules/ipex/__init__.py.py +++ /dev/null @@ -1,165 +0,0 @@ -import os -import sys -import contextlib -import torch -import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import -from .hijacks import ipex_hijacks -from .attention import attention_init - -# pylint: disable=protected-access, missing-function-docstring, line-too-long - -def ipex_init(): # pylint: disable=too-many-statements - try: - #Replace cuda with xpu: - torch.cuda.current_device = torch.xpu.current_device - torch.cuda.current_stream = torch.xpu.current_stream - torch.cuda.device = torch.xpu.device - torch.cuda.device_count = torch.xpu.device_count - torch.cuda.device_of = torch.xpu.device_of - torch.cuda.getDeviceIdListForCard = torch.xpu.getDeviceIdListForCard - torch.cuda.get_device_name = torch.xpu.get_device_name - torch.cuda.get_device_properties = torch.xpu.get_device_properties - torch.cuda.init = torch.xpu.init - torch.cuda.is_available = torch.xpu.is_available - torch.cuda.is_initialized = torch.xpu.is_initialized - torch.cuda.is_current_stream_capturing = lambda: False - torch.cuda.set_device = torch.xpu.set_device - torch.cuda.stream = torch.xpu.stream - torch.cuda.synchronize = torch.xpu.synchronize - torch.cuda.Event = torch.xpu.Event - torch.cuda.Stream = torch.xpu.Stream - torch.cuda.FloatTensor = torch.xpu.FloatTensor - torch.Tensor.cuda = torch.Tensor.xpu - torch.Tensor.is_cuda = torch.Tensor.is_xpu - torch.cuda._initialization_lock = torch.xpu.lazy_init._initialization_lock - torch.cuda._initialized = torch.xpu.lazy_init._initialized - torch.cuda._lazy_seed_tracker = torch.xpu.lazy_init._lazy_seed_tracker - torch.cuda._queued_calls = torch.xpu.lazy_init._queued_calls - torch.cuda._tls = torch.xpu.lazy_init._tls - torch.cuda.threading = torch.xpu.lazy_init.threading - torch.cuda.traceback = torch.xpu.lazy_init.traceback - torch.cuda.Optional = torch.xpu.Optional - torch.cuda.__cached__ = torch.xpu.__cached__ - torch.cuda.__loader__ = torch.xpu.__loader__ - torch.cuda.ComplexFloatStorage = torch.xpu.ComplexFloatStorage - torch.cuda.Tuple = torch.xpu.Tuple - torch.cuda.streams = torch.xpu.streams - torch.cuda._lazy_new = torch.xpu._lazy_new - torch.cuda.FloatStorage = torch.xpu.FloatStorage - torch.cuda.Any = torch.xpu.Any - torch.cuda.__doc__ = torch.xpu.__doc__ - torch.cuda.default_generators = torch.xpu.default_generators - torch.cuda.HalfTensor = torch.xpu.HalfTensor - torch.cuda._get_device_index = torch.xpu._get_device_index - torch.cuda.__path__ = torch.xpu.__path__ - torch.cuda.Device = torch.xpu.Device - torch.cuda.IntTensor = torch.xpu.IntTensor - torch.cuda.ByteStorage = torch.xpu.ByteStorage - torch.cuda.set_stream = torch.xpu.set_stream - torch.cuda.BoolStorage = torch.xpu.BoolStorage - torch.cuda.os = torch.xpu.os - torch.cuda.torch = torch.xpu.torch - torch.cuda.BFloat16Storage = torch.xpu.BFloat16Storage - torch.cuda.Union = torch.xpu.Union - torch.cuda.DoubleTensor = torch.xpu.DoubleTensor - torch.cuda.ShortTensor = torch.xpu.ShortTensor - torch.cuda.LongTensor = torch.xpu.LongTensor - torch.cuda.IntStorage = torch.xpu.IntStorage - torch.cuda.LongStorage = torch.xpu.LongStorage - torch.cuda.__annotations__ = torch.xpu.__annotations__ - torch.cuda.__package__ = torch.xpu.__package__ - torch.cuda.__builtins__ = torch.xpu.__builtins__ - torch.cuda.CharTensor = 
torch.xpu.CharTensor - torch.cuda.List = torch.xpu.List - torch.cuda._lazy_init = torch.xpu._lazy_init - torch.cuda.BFloat16Tensor = torch.xpu.BFloat16Tensor - torch.cuda.DoubleStorage = torch.xpu.DoubleStorage - torch.cuda.ByteTensor = torch.xpu.ByteTensor - torch.cuda.StreamContext = torch.xpu.StreamContext - torch.cuda.ComplexDoubleStorage = torch.xpu.ComplexDoubleStorage - torch.cuda.ShortStorage = torch.xpu.ShortStorage - torch.cuda._lazy_call = torch.xpu._lazy_call - torch.cuda.HalfStorage = torch.xpu.HalfStorage - torch.cuda.random = torch.xpu.random - torch.cuda._device = torch.xpu._device - torch.cuda.classproperty = torch.xpu.classproperty - torch.cuda.__name__ = torch.xpu.__name__ - torch.cuda._device_t = torch.xpu._device_t - torch.cuda.warnings = torch.xpu.warnings - torch.cuda.__spec__ = torch.xpu.__spec__ - torch.cuda.BoolTensor = torch.xpu.BoolTensor - torch.cuda.CharStorage = torch.xpu.CharStorage - torch.cuda.__file__ = torch.xpu.__file__ - torch.cuda._is_in_bad_fork = torch.xpu.lazy_init._is_in_bad_fork - #torch.cuda.is_current_stream_capturing = torch.xpu.is_current_stream_capturing - - #Memory: - torch.cuda.memory = torch.xpu.memory - if 'linux' in sys.platform and "WSL2" in os.popen("uname -a").read(): - torch.xpu.empty_cache = lambda: None - torch.cuda.empty_cache = torch.xpu.empty_cache - torch.cuda.memory_stats = torch.xpu.memory_stats - torch.cuda.memory_summary = torch.xpu.memory_summary - torch.cuda.memory_snapshot = torch.xpu.memory_snapshot - torch.cuda.memory_allocated = torch.xpu.memory_allocated - torch.cuda.max_memory_allocated = torch.xpu.max_memory_allocated - torch.cuda.memory_reserved = torch.xpu.memory_reserved - torch.cuda.memory_cached = torch.xpu.memory_reserved - torch.cuda.max_memory_reserved = torch.xpu.max_memory_reserved - torch.cuda.max_memory_cached = torch.xpu.max_memory_reserved - torch.cuda.reset_peak_memory_stats = torch.xpu.reset_peak_memory_stats - torch.cuda.reset_max_memory_cached = torch.xpu.reset_peak_memory_stats - torch.cuda.reset_max_memory_allocated = torch.xpu.reset_peak_memory_stats - torch.cuda.memory_stats_as_nested_dict = torch.xpu.memory_stats_as_nested_dict - torch.cuda.reset_accumulated_memory_stats = torch.xpu.reset_accumulated_memory_stats - - #RNG: - torch.cuda.get_rng_state = torch.xpu.get_rng_state - torch.cuda.get_rng_state_all = torch.xpu.get_rng_state_all - torch.cuda.set_rng_state = torch.xpu.set_rng_state - torch.cuda.set_rng_state_all = torch.xpu.set_rng_state_all - torch.cuda.manual_seed = torch.xpu.manual_seed - torch.cuda.manual_seed_all = torch.xpu.manual_seed_all - torch.cuda.seed = torch.xpu.seed - torch.cuda.seed_all = torch.xpu.seed_all - torch.cuda.initial_seed = torch.xpu.initial_seed - - #AMP: - torch.cuda.amp = torch.xpu.amp - if not hasattr(torch.cuda.amp, "common"): - torch.cuda.amp.common = contextlib.nullcontext() - torch.cuda.amp.common.amp_definitely_not_available = lambda: False - try: - torch.cuda.amp.GradScaler = torch.xpu.amp.GradScaler - except Exception: # pylint: disable=broad-exception-caught - try: - from .gradscaler import gradscaler_init # pylint: disable=import-outside-toplevel, import-error - gradscaler_init() - torch.cuda.amp.GradScaler = torch.xpu.amp.GradScaler - except Exception: # pylint: disable=broad-exception-caught - torch.cuda.amp.GradScaler = ipex.cpu.autocast._grad_scaler.GradScaler - - #C - torch._C._cuda_getCurrentRawStream = ipex._C._getCurrentStream - ipex._C._DeviceProperties.major = 2023 - ipex._C._DeviceProperties.minor = 2 - - #Fix functions with ipex: - 
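-        # The shims below fake a CUDA 11.7 device so that downstream code
-        # branching on torch.version.cuda / get_device_capability keeps
-        # working on XPU; mem_get_info mimics CUDA's (free, total) pair.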
torch.cuda.mem_get_info = lambda device=None: [(torch.xpu.get_device_properties(device).total_memory - torch.xpu.memory_allocated(device)), torch.xpu.get_device_properties(device).total_memory] - torch._utils._get_available_device_type = lambda: "xpu" - torch.has_cuda = True - torch.cuda.has_half = True - torch.cuda.is_bf16_supported = lambda *args, **kwargs: True - torch.cuda.is_fp16_supported = lambda *args, **kwargs: True - torch.version.cuda = "11.7" - torch.cuda.get_device_capability = lambda *args, **kwargs: [11,7] - torch.cuda.get_device_properties.major = 11 - torch.cuda.get_device_properties.minor = 7 - torch.cuda.ipc_collect = lambda *args, **kwargs: None - torch.cuda.utilization = lambda *args, **kwargs: 0 - - ipex_hijacks() - attention_init() - except Exception as e: - return False, e - return True, None \ No newline at end of file diff --git a/spaces/RMXK/RVC_HFF/tools/torchgate/torchgate.py b/spaces/RMXK/RVC_HFF/tools/torchgate/torchgate.py deleted file mode 100644 index 086f2ab38e4ad79e432a51c38ed7e59defae0acd..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/tools/torchgate/torchgate.py +++ /dev/null @@ -1,264 +0,0 @@ -import torch -from torch.nn.functional import conv1d, conv2d -from typing import Union, Optional -from .utils import linspace, temperature_sigmoid, amp_to_db - - -class TorchGate(torch.nn.Module): - """ - A PyTorch module that applies a spectral gate to an input signal. - - Arguments: - sr {int} -- Sample rate of the input signal. - nonstationary {bool} -- Whether to use non-stationary or stationary masking (default: {False}). - n_std_thresh_stationary {float} -- Number of standard deviations above mean to threshold noise for - stationary masking (default: {1.5}). - n_thresh_nonstationary {float} -- Number of multiplies above smoothed magnitude spectrogram. for - non-stationary masking (default: {1.3}). - temp_coeff_nonstationary {float} -- Temperature coefficient for non-stationary masking (default: {0.1}). - n_movemean_nonstationary {int} -- Number of samples for moving average smoothing in non-stationary masking - (default: {20}). - prop_decrease {float} -- Proportion to decrease signal by where the mask is zero (default: {1.0}). - n_fft {int} -- Size of FFT for STFT (default: {1024}). - win_length {[int]} -- Window length for STFT. If None, defaults to `n_fft` (default: {None}). - hop_length {[int]} -- Hop length for STFT. If None, defaults to `win_length` // 4 (default: {None}). - freq_mask_smooth_hz {float} -- Frequency smoothing width for mask (in Hz). If None, no smoothing is applied - (default: {500}). - time_mask_smooth_ms {float} -- Time smoothing width for mask (in ms). If None, no smoothing is applied - (default: {50}). 
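-
-    A minimal usage sketch (sample rate and tensor shapes are illustrative)::
-
-        tg = TorchGate(sr=16000, nonstationary=False)
-        denoised = tg(noisy)  # noisy: (batch, num_samples), at least 2 * win_length samples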
- """ - - @torch.no_grad() - def __init__( - self, - sr: int, - nonstationary: bool = False, - n_std_thresh_stationary: float = 1.5, - n_thresh_nonstationary: float = 1.3, - temp_coeff_nonstationary: float = 0.1, - n_movemean_nonstationary: int = 20, - prop_decrease: float = 1.0, - n_fft: int = 1024, - win_length: bool = None, - hop_length: int = None, - freq_mask_smooth_hz: float = 500, - time_mask_smooth_ms: float = 50, - ): - super().__init__() - - # General Params - self.sr = sr - self.nonstationary = nonstationary - assert 0.0 <= prop_decrease <= 1.0 - self.prop_decrease = prop_decrease - - # STFT Params - self.n_fft = n_fft - self.win_length = self.n_fft if win_length is None else win_length - self.hop_length = self.win_length // 4 if hop_length is None else hop_length - - # Stationary Params - self.n_std_thresh_stationary = n_std_thresh_stationary - - # Non-Stationary Params - self.temp_coeff_nonstationary = temp_coeff_nonstationary - self.n_movemean_nonstationary = n_movemean_nonstationary - self.n_thresh_nonstationary = n_thresh_nonstationary - - # Smooth Mask Params - self.freq_mask_smooth_hz = freq_mask_smooth_hz - self.time_mask_smooth_ms = time_mask_smooth_ms - self.register_buffer("smoothing_filter", self._generate_mask_smoothing_filter()) - - @torch.no_grad() - def _generate_mask_smoothing_filter(self) -> Union[torch.Tensor, None]: - """ - A PyTorch module that applies a spectral gate to an input signal using the STFT. - - Returns: - smoothing_filter (torch.Tensor): a 2D tensor representing the smoothing filter, - with shape (n_grad_freq, n_grad_time), where n_grad_freq is the number of frequency - bins to smooth and n_grad_time is the number of time frames to smooth. - If both self.freq_mask_smooth_hz and self.time_mask_smooth_ms are None, returns None. - """ - if self.freq_mask_smooth_hz is None and self.time_mask_smooth_ms is None: - return None - - n_grad_freq = ( - 1 - if self.freq_mask_smooth_hz is None - else int(self.freq_mask_smooth_hz / (self.sr / (self.n_fft / 2))) - ) - if n_grad_freq < 1: - raise ValueError( - f"freq_mask_smooth_hz needs to be at least {int((self.sr / (self._n_fft / 2)))} Hz" - ) - - n_grad_time = ( - 1 - if self.time_mask_smooth_ms is None - else int(self.time_mask_smooth_ms / ((self.hop_length / self.sr) * 1000)) - ) - if n_grad_time < 1: - raise ValueError( - f"time_mask_smooth_ms needs to be at least {int((self.hop_length / self.sr) * 1000)} ms" - ) - - if n_grad_time == 1 and n_grad_freq == 1: - return None - - v_f = torch.cat( - [ - linspace(0, 1, n_grad_freq + 1, endpoint=False), - linspace(1, 0, n_grad_freq + 2), - ] - )[1:-1] - v_t = torch.cat( - [ - linspace(0, 1, n_grad_time + 1, endpoint=False), - linspace(1, 0, n_grad_time + 2), - ] - )[1:-1] - smoothing_filter = torch.outer(v_f, v_t).unsqueeze(0).unsqueeze(0) - - return smoothing_filter / smoothing_filter.sum() - - @torch.no_grad() - def _stationary_mask( - self, X_db: torch.Tensor, xn: Optional[torch.Tensor] = None - ) -> torch.Tensor: - """ - Computes a stationary binary mask to filter out noise in a log-magnitude spectrogram. - - Arguments: - X_db (torch.Tensor): 2D tensor of shape (frames, freq_bins) containing the log-magnitude spectrogram. - xn (torch.Tensor): 1D tensor containing the audio signal corresponding to X_db. - - Returns: - sig_mask (torch.Tensor): Binary mask of the same shape as X_db, where values greater than the threshold - are set to 1, and the rest are set to 0. 
- """ - if xn is not None: - XN = torch.stft( - xn, - n_fft=self.n_fft, - hop_length=self.hop_length, - win_length=self.win_length, - return_complex=True, - pad_mode="constant", - center=True, - window=torch.hann_window(self.win_length).to(xn.device), - ) - - XN_db = amp_to_db(XN).to(dtype=X_db.dtype) - else: - XN_db = X_db - - # calculate mean and standard deviation along the frequency axis - std_freq_noise, mean_freq_noise = torch.std_mean(XN_db, dim=-1) - - # compute noise threshold - noise_thresh = mean_freq_noise + std_freq_noise * self.n_std_thresh_stationary - - # create binary mask by thresholding the spectrogram - sig_mask = X_db > noise_thresh.unsqueeze(2) - return sig_mask - - @torch.no_grad() - def _nonstationary_mask(self, X_abs: torch.Tensor) -> torch.Tensor: - """ - Computes a non-stationary binary mask to filter out noise in a log-magnitude spectrogram. - - Arguments: - X_abs (torch.Tensor): 2D tensor of shape (frames, freq_bins) containing the magnitude spectrogram. - - Returns: - sig_mask (torch.Tensor): Binary mask of the same shape as X_abs, where values greater than the threshold - are set to 1, and the rest are set to 0. - """ - X_smoothed = ( - conv1d( - X_abs.reshape(-1, 1, X_abs.shape[-1]), - torch.ones( - self.n_movemean_nonstationary, - dtype=X_abs.dtype, - device=X_abs.device, - ).view(1, 1, -1), - padding="same", - ).view(X_abs.shape) - / self.n_movemean_nonstationary - ) - - # Compute slowness ratio and apply temperature sigmoid - slowness_ratio = (X_abs - X_smoothed) / (X_smoothed + 1e-6) - sig_mask = temperature_sigmoid( - slowness_ratio, self.n_thresh_nonstationary, self.temp_coeff_nonstationary - ) - - return sig_mask - - def forward( - self, x: torch.Tensor, xn: Optional[torch.Tensor] = None - ) -> torch.Tensor: - """ - Apply the proposed algorithm to the input signal. - - Arguments: - x (torch.Tensor): The input audio signal, with shape (batch_size, signal_length). - xn (Optional[torch.Tensor]): The noise signal used for stationary noise reduction. If `None`, the input - signal is used as the noise signal. Default: `None`. - - Returns: - torch.Tensor: The denoised audio signal, with the same shape as the input signal. 
- """ - assert x.ndim == 2 - if x.shape[-1] < self.win_length * 2: - raise Exception(f"x must be bigger than {self.win_length * 2}") - - assert xn is None or xn.ndim == 1 or xn.ndim == 2 - if xn is not None and xn.shape[-1] < self.win_length * 2: - raise Exception(f"xn must be bigger than {self.win_length * 2}") - - # Compute short-time Fourier transform (STFT) - X = torch.stft( - x, - n_fft=self.n_fft, - hop_length=self.hop_length, - win_length=self.win_length, - return_complex=True, - pad_mode="constant", - center=True, - window=torch.hann_window(self.win_length).to(x.device), - ) - - # Compute signal mask based on stationary or nonstationary assumptions - if self.nonstationary: - sig_mask = self._nonstationary_mask(X.abs()) - else: - sig_mask = self._stationary_mask(amp_to_db(X), xn) - - # Propagate decrease in signal power - sig_mask = self.prop_decrease * (sig_mask * 1.0 - 1.0) + 1.0 - - # Smooth signal mask with 2D convolution - if self.smoothing_filter is not None: - sig_mask = conv2d( - sig_mask.unsqueeze(1), - self.smoothing_filter.to(sig_mask.dtype), - padding="same", - ) - - # Apply signal mask to STFT magnitude and phase components - Y = X * sig_mask.squeeze(1) - - # Inverse STFT to obtain time-domain signal - y = torch.istft( - Y, - n_fft=self.n_fft, - hop_length=self.hop_length, - win_length=self.win_length, - center=True, - window=torch.hann_window(self.win_length).to(Y.device), - ) - - return y.to(dtype=x.dtype) diff --git a/spaces/Ramse/TTS_Hindi/modules/hifigan/model/generator.py b/spaces/Ramse/TTS_Hindi/modules/hifigan/model/generator.py deleted file mode 100644 index ba88f791c54a97b3284a94915fbdb33629502db0..0000000000000000000000000000000000000000 --- a/spaces/Ramse/TTS_Hindi/modules/hifigan/model/generator.py +++ /dev/null @@ -1,408 +0,0 @@ -import torch -from torch import nn -from .mrf import MRF - -''' -class Generator(nn.Module): - -def __init__(self, input_channel=80, hu=512, ku=[16, 16, 4, 4], kr=[3, 7, 11], Dr=[1, 3, 5]): - super(Generator, self).__init__() - generator = [] - generator += [ - nn.ReflectionPad1d(3), - nn.utils.weight_norm(nn.Conv1d(input_channel, hu, kernel_size=7)) - ] - - - - for k in ku: - inp = hu - out = int(inp/2) - generator += [ - nn.LeakyReLU(0.2), - nn.ConvTranspose1d(inp, out, k, k//2), - MRF(kr, out, Dr) - ] - hu = out - - generator += [ - nn.LeakyReLU(0.2), - nn.ReflectionPad1d(3), - nn.utils.weight_norm(nn.Conv1d(hu, 1, kernel_size=7, stride=1)), - nn.Tanh() - ] - self.generator = nn.Sequential(*generator) - - - -def forward(self, x): - x = self.generator(x) - return x - -''' - -class Generator(nn.Module): - - def __init__(self, input_channel=80, hu=512, ku=[16, 16, 4, 4], kr=[3, 7, 11], Dr=[1, 3, 5]): - super(Generator, self).__init__() - self.input = nn.Sequential( - nn.ReflectionPad1d(3), - nn.utils.weight_norm(nn.Conv1d(input_channel, hu, kernel_size=7)) - ) - - generator = [] - - for k in ku: - inp = hu - out = int(inp/2) - generator += [ - nn.LeakyReLU(0.2), - nn.utils.weight_norm(nn.ConvTranspose1d(inp, out, k, k//2)), - MRF(kr, out, Dr) - ] - hu = out - self.generator = nn.Sequential(*generator) - - self.output = nn.Sequential( - nn.LeakyReLU(0.2), - nn.ReflectionPad1d(3), - nn.utils.weight_norm(nn.Conv1d(hu, 1, kernel_size=7, stride=1)), - nn.Tanh() - - ) - - def forward(self, x): - x1 = self.input(x) - x2 = self.generator(x1) - out = self.output(x2) - return out - - def eval(self, inference=False): - super(Generator, self).eval() - - # don't remove weight norm while validation in training loop - if inference: - 
self.remove_weight_norm() - - # def remove_weight_norm(self): - # for idx, layer in enumerate(self.generator): - # if len(layer.state_dict()) != 0: - # try: - # nn.utils.remove_weight_norm(layer) - # except: - # layer.remove_weight_norm() - - def remove_weight_norm(self): - for idx, layer in enumerate(self.input): - if len(layer.state_dict()) != 0: - try: - nn.utils.remove_weight_norm(layer) - except: - layer.remove_weight_norm() - - for idx, layer in enumerate(self.output): - if len(layer.state_dict()) != 0: - try: - nn.utils.remove_weight_norm(layer) - except: - layer.remove_weight_norm() - for idx, layer in enumerate(self.generator): - if len(layer.state_dict()) != 0: - try: - nn.utils.remove_weight_norm(layer) - except: - layer.remove_weight_norm() - - def apply_weight_norm(self): - """Apply weight normalization module from all of the layers.""" - - def _apply_weight_norm(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - torch.nn.utils.weight_norm(m) - - self.apply(_apply_weight_norm) - - def inference(self, mel): - hop_length = 256 - # pad input mel with zeros to cut artifact - # see https://github.com/seungwonpark/melgan/issues/8 - zero = torch.full((1, self.mel_channel, 10), -11.5129).to(mel.device) - mel = torch.cat((mel, zero), dim=2) - - audio = self.forward(mel) - return audio - - -''' ----------------------------------------------------------------- - Layer (type) Output Shape Param # -================================================================ - ReflectionPad1d-1 [-1, 80, 506] 0 - Conv1d-2 [-1, 512, 500] 287,232 - LeakyReLU-3 [-1, 512, 500] 0 - ConvTranspose1d-4 [-1, 256, 4008] 2,097,408 - Conv1d-5 [-1, 256, 4008] 65,792 - LeakyReLU-6 [-1, 256, 4008] 0 - ReflectionPad1d-7 [-1, 256, 4010] 0 - Conv1d-8 [-1, 256, 4008] 196,864 - LeakyReLU-9 [-1, 256, 4008] 0 - ReflectionPad1d-10 [-1, 256, 4008] 0 - Conv1d-11 [-1, 256, 4008] 65,792 - LeakyReLU-12 [-1, 256, 4008] 0 - ReflectionPad1d-13 [-1, 256, 4014] 0 - Conv1d-14 [-1, 256, 4008] 196,864 - LeakyReLU-15 [-1, 256, 4008] 0 - ReflectionPad1d-16 [-1, 256, 4008] 0 - Conv1d-17 [-1, 256, 4008] 65,792 - LeakyReLU-18 [-1, 256, 4008] 0 - ReflectionPad1d-19 [-1, 256, 4018] 0 - Conv1d-20 [-1, 256, 4008] 196,864 - LeakyReLU-21 [-1, 256, 4008] 0 - ReflectionPad1d-22 [-1, 256, 4008] 0 - Conv1d-23 [-1, 256, 4008] 65,792 - ResStack-24 [-1, 256, 4008] 0 - Conv1d-25 [-1, 256, 4008] 65,792 - LeakyReLU-26 [-1, 256, 4008] 0 - ReflectionPad1d-27 [-1, 256, 4010] 0 - Conv1d-28 [-1, 256, 4004] 459,008 - LeakyReLU-29 [-1, 256, 4004] 0 - ReflectionPad1d-30 [-1, 256, 4016] 0 - Conv1d-31 [-1, 256, 4016] 65,792 - LeakyReLU-32 [-1, 256, 4016] 0 - ReflectionPad1d-33 [-1, 256, 4022] 0 - Conv1d-34 [-1, 256, 4004] 459,008 - LeakyReLU-35 [-1, 256, 4004] 0 - ReflectionPad1d-36 [-1, 256, 4016] 0 - Conv1d-37 [-1, 256, 4016] 65,792 - LeakyReLU-38 [-1, 256, 4016] 0 - ReflectionPad1d-39 [-1, 256, 4026] 0 - Conv1d-40 [-1, 256, 3996] 459,008 - LeakyReLU-41 [-1, 256, 3996] 0 - ReflectionPad1d-42 [-1, 256, 4008] 0 - Conv1d-43 [-1, 256, 4008] 65,792 - ResStack-44 [-1, 256, 4008] 0 - Conv1d-45 [-1, 256, 4008] 65,792 - LeakyReLU-46 [-1, 256, 4008] 0 - ReflectionPad1d-47 [-1, 256, 4010] 0 - Conv1d-48 [-1, 256, 4000] 721,152 - LeakyReLU-49 [-1, 256, 4000] 0 - ReflectionPad1d-50 [-1, 256, 4024] 0 - Conv1d-51 [-1, 256, 4024] 65,792 - LeakyReLU-52 [-1, 256, 4024] 0 - ReflectionPad1d-53 [-1, 256, 4030] 0 - Conv1d-54 [-1, 256, 4000] 721,152 - LeakyReLU-55 [-1, 256, 4000] 0 - ReflectionPad1d-56 [-1, 256, 4024] 0 - Conv1d-57 [-1, 256, 4024] 65,792 - 
LeakyReLU-58 [-1, 256, 4024] 0 - ReflectionPad1d-59 [-1, 256, 4034] 0 - Conv1d-60 [-1, 256, 3984] 721,152 - LeakyReLU-61 [-1, 256, 3984] 0 - ReflectionPad1d-62 [-1, 256, 4008] 0 - Conv1d-63 [-1, 256, 4008] 65,792 - ResStack-64 [-1, 256, 4008] 0 - MRF-65 [-1, 256, 4008] 0 - LeakyReLU-66 [-1, 256, 4008] 0 - ConvTranspose1d-67 [-1, 128, 32072] 524,416 - Conv1d-68 [-1, 128, 32072] 16,512 - LeakyReLU-69 [-1, 128, 32072] 0 - ReflectionPad1d-70 [-1, 128, 32074] 0 - Conv1d-71 [-1, 128, 32072] 49,280 - LeakyReLU-72 [-1, 128, 32072] 0 - ReflectionPad1d-73 [-1, 128, 32072] 0 - Conv1d-74 [-1, 128, 32072] 16,512 - LeakyReLU-75 [-1, 128, 32072] 0 - ReflectionPad1d-76 [-1, 128, 32078] 0 - Conv1d-77 [-1, 128, 32072] 49,280 - LeakyReLU-78 [-1, 128, 32072] 0 - ReflectionPad1d-79 [-1, 128, 32072] 0 - Conv1d-80 [-1, 128, 32072] 16,512 - LeakyReLU-81 [-1, 128, 32072] 0 - ReflectionPad1d-82 [-1, 128, 32082] 0 - Conv1d-83 [-1, 128, 32072] 49,280 - LeakyReLU-84 [-1, 128, 32072] 0 - ReflectionPad1d-85 [-1, 128, 32072] 0 - Conv1d-86 [-1, 128, 32072] 16,512 - ResStack-87 [-1, 128, 32072] 0 - Conv1d-88 [-1, 128, 32072] 16,512 - LeakyReLU-89 [-1, 128, 32072] 0 - ReflectionPad1d-90 [-1, 128, 32074] 0 - Conv1d-91 [-1, 128, 32068] 114,816 - LeakyReLU-92 [-1, 128, 32068] 0 - ReflectionPad1d-93 [-1, 128, 32080] 0 - Conv1d-94 [-1, 128, 32080] 16,512 - LeakyReLU-95 [-1, 128, 32080] 0 - ReflectionPad1d-96 [-1, 128, 32086] 0 - Conv1d-97 [-1, 128, 32068] 114,816 - LeakyReLU-98 [-1, 128, 32068] 0 - ReflectionPad1d-99 [-1, 128, 32080] 0 - Conv1d-100 [-1, 128, 32080] 16,512 - LeakyReLU-101 [-1, 128, 32080] 0 - ReflectionPad1d-102 [-1, 128, 32090] 0 - Conv1d-103 [-1, 128, 32060] 114,816 - LeakyReLU-104 [-1, 128, 32060] 0 - ReflectionPad1d-105 [-1, 128, 32072] 0 - Conv1d-106 [-1, 128, 32072] 16,512 - ResStack-107 [-1, 128, 32072] 0 - Conv1d-108 [-1, 128, 32072] 16,512 - LeakyReLU-109 [-1, 128, 32072] 0 - ReflectionPad1d-110 [-1, 128, 32074] 0 - Conv1d-111 [-1, 128, 32064] 180,352 - LeakyReLU-112 [-1, 128, 32064] 0 - ReflectionPad1d-113 [-1, 128, 32088] 0 - Conv1d-114 [-1, 128, 32088] 16,512 - LeakyReLU-115 [-1, 128, 32088] 0 - ReflectionPad1d-116 [-1, 128, 32094] 0 - Conv1d-117 [-1, 128, 32064] 180,352 - LeakyReLU-118 [-1, 128, 32064] 0 - ReflectionPad1d-119 [-1, 128, 32088] 0 - Conv1d-120 [-1, 128, 32088] 16,512 - LeakyReLU-121 [-1, 128, 32088] 0 - ReflectionPad1d-122 [-1, 128, 32098] 0 - Conv1d-123 [-1, 128, 32048] 180,352 - LeakyReLU-124 [-1, 128, 32048] 0 - ReflectionPad1d-125 [-1, 128, 32072] 0 - Conv1d-126 [-1, 128, 32072] 16,512 - ResStack-127 [-1, 128, 32072] 0 - MRF-128 [-1, 128, 32072] 0 - LeakyReLU-129 [-1, 128, 32072] 0 - ConvTranspose1d-130 [-1, 64, 64146] 32,832 - Conv1d-131 [-1, 64, 64146] 4,160 - LeakyReLU-132 [-1, 64, 64146] 0 - ReflectionPad1d-133 [-1, 64, 64148] 0 - Conv1d-134 [-1, 64, 64146] 12,352 - LeakyReLU-135 [-1, 64, 64146] 0 - ReflectionPad1d-136 [-1, 64, 64146] 0 - Conv1d-137 [-1, 64, 64146] 4,160 - LeakyReLU-138 [-1, 64, 64146] 0 - ReflectionPad1d-139 [-1, 64, 64152] 0 - Conv1d-140 [-1, 64, 64146] 12,352 - LeakyReLU-141 [-1, 64, 64146] 0 - ReflectionPad1d-142 [-1, 64, 64146] 0 - Conv1d-143 [-1, 64, 64146] 4,160 - LeakyReLU-144 [-1, 64, 64146] 0 - ReflectionPad1d-145 [-1, 64, 64156] 0 - Conv1d-146 [-1, 64, 64146] 12,352 - LeakyReLU-147 [-1, 64, 64146] 0 - ReflectionPad1d-148 [-1, 64, 64146] 0 - Conv1d-149 [-1, 64, 64146] 4,160 - ResStack-150 [-1, 64, 64146] 0 - Conv1d-151 [-1, 64, 64146] 4,160 - LeakyReLU-152 [-1, 64, 64146] 0 - ReflectionPad1d-153 [-1, 64, 64148] 0 - Conv1d-154 [-1, 64, 64142] 28,736 - 
LeakyReLU-155 [-1, 64, 64142] 0 - ReflectionPad1d-156 [-1, 64, 64154] 0 - Conv1d-157 [-1, 64, 64154] 4,160 - LeakyReLU-158 [-1, 64, 64154] 0 - ReflectionPad1d-159 [-1, 64, 64160] 0 - Conv1d-160 [-1, 64, 64142] 28,736 - LeakyReLU-161 [-1, 64, 64142] 0 - ReflectionPad1d-162 [-1, 64, 64154] 0 - Conv1d-163 [-1, 64, 64154] 4,160 - LeakyReLU-164 [-1, 64, 64154] 0 - ReflectionPad1d-165 [-1, 64, 64164] 0 - Conv1d-166 [-1, 64, 64134] 28,736 - LeakyReLU-167 [-1, 64, 64134] 0 - ReflectionPad1d-168 [-1, 64, 64146] 0 - Conv1d-169 [-1, 64, 64146] 4,160 - ResStack-170 [-1, 64, 64146] 0 - Conv1d-171 [-1, 64, 64146] 4,160 - LeakyReLU-172 [-1, 64, 64146] 0 - ReflectionPad1d-173 [-1, 64, 64148] 0 - Conv1d-174 [-1, 64, 64138] 45,120 - LeakyReLU-175 [-1, 64, 64138] 0 - ReflectionPad1d-176 [-1, 64, 64162] 0 - Conv1d-177 [-1, 64, 64162] 4,160 - LeakyReLU-178 [-1, 64, 64162] 0 - ReflectionPad1d-179 [-1, 64, 64168] 0 - Conv1d-180 [-1, 64, 64138] 45,120 - LeakyReLU-181 [-1, 64, 64138] 0 - ReflectionPad1d-182 [-1, 64, 64162] 0 - Conv1d-183 [-1, 64, 64162] 4,160 - LeakyReLU-184 [-1, 64, 64162] 0 - ReflectionPad1d-185 [-1, 64, 64172] 0 - Conv1d-186 [-1, 64, 64122] 45,120 - LeakyReLU-187 [-1, 64, 64122] 0 - ReflectionPad1d-188 [-1, 64, 64146] 0 - Conv1d-189 [-1, 64, 64146] 4,160 - ResStack-190 [-1, 64, 64146] 0 - MRF-191 [-1, 64, 64146] 0 - LeakyReLU-192 [-1, 64, 64146] 0 - ConvTranspose1d-193 [-1, 32, 128294] 8,224 - Conv1d-194 [-1, 32, 128294] 1,056 - LeakyReLU-195 [-1, 32, 128294] 0 - ReflectionPad1d-196 [-1, 32, 128296] 0 - Conv1d-197 [-1, 32, 128294] 3,104 - LeakyReLU-198 [-1, 32, 128294] 0 - ReflectionPad1d-199 [-1, 32, 128294] 0 - Conv1d-200 [-1, 32, 128294] 1,056 - LeakyReLU-201 [-1, 32, 128294] 0 - ReflectionPad1d-202 [-1, 32, 128300] 0 - Conv1d-203 [-1, 32, 128294] 3,104 - LeakyReLU-204 [-1, 32, 128294] 0 - ReflectionPad1d-205 [-1, 32, 128294] 0 - Conv1d-206 [-1, 32, 128294] 1,056 - LeakyReLU-207 [-1, 32, 128294] 0 - ReflectionPad1d-208 [-1, 32, 128304] 0 - Conv1d-209 [-1, 32, 128294] 3,104 - LeakyReLU-210 [-1, 32, 128294] 0 - ReflectionPad1d-211 [-1, 32, 128294] 0 - Conv1d-212 [-1, 32, 128294] 1,056 - ResStack-213 [-1, 32, 128294] 0 - Conv1d-214 [-1, 32, 128294] 1,056 - LeakyReLU-215 [-1, 32, 128294] 0 - ReflectionPad1d-216 [-1, 32, 128296] 0 - Conv1d-217 [-1, 32, 128290] 7,200 - LeakyReLU-218 [-1, 32, 128290] 0 - ReflectionPad1d-219 [-1, 32, 128302] 0 - Conv1d-220 [-1, 32, 128302] 1,056 - LeakyReLU-221 [-1, 32, 128302] 0 - ReflectionPad1d-222 [-1, 32, 128308] 0 - Conv1d-223 [-1, 32, 128290] 7,200 - LeakyReLU-224 [-1, 32, 128290] 0 - ReflectionPad1d-225 [-1, 32, 128302] 0 - Conv1d-226 [-1, 32, 128302] 1,056 - LeakyReLU-227 [-1, 32, 128302] 0 - ReflectionPad1d-228 [-1, 32, 128312] 0 - Conv1d-229 [-1, 32, 128282] 7,200 - LeakyReLU-230 [-1, 32, 128282] 0 - ReflectionPad1d-231 [-1, 32, 128294] 0 - Conv1d-232 [-1, 32, 128294] 1,056 - ResStack-233 [-1, 32, 128294] 0 - Conv1d-234 [-1, 32, 128294] 1,056 - LeakyReLU-235 [-1, 32, 128294] 0 - ReflectionPad1d-236 [-1, 32, 128296] 0 - Conv1d-237 [-1, 32, 128286] 11,296 - LeakyReLU-238 [-1, 32, 128286] 0 - ReflectionPad1d-239 [-1, 32, 128310] 0 - Conv1d-240 [-1, 32, 128310] 1,056 - LeakyReLU-241 [-1, 32, 128310] 0 - ReflectionPad1d-242 [-1, 32, 128316] 0 - Conv1d-243 [-1, 32, 128286] 11,296 - LeakyReLU-244 [-1, 32, 128286] 0 - ReflectionPad1d-245 [-1, 32, 128310] 0 - Conv1d-246 [-1, 32, 128310] 1,056 - LeakyReLU-247 [-1, 32, 128310] 0 - ReflectionPad1d-248 [-1, 32, 128320] 0 - Conv1d-249 [-1, 32, 128270] 11,296 - LeakyReLU-250 [-1, 32, 128270] 0 - ReflectionPad1d-251 [-1, 
32, 128294] 0 - Conv1d-252 [-1, 32, 128294] 1,056 - ResStack-253 [-1, 32, 128294] 0 - MRF-254 [-1, 32, 128294] 0 - LeakyReLU-255 [-1, 32, 128294] 0 - ReflectionPad1d-256 [-1, 32, 128300] 0 - Conv1d-257 [-1, 1, 128294] 225 - Tanh-258 [-1, 1, 128294] 0 -================================================================ -Total params: 9,488,417 -Trainable params: 9,488,417 -Non-trainable params: 0 ----------------------------------------------------------------- -Input size (MB): 0.15 -Forward/backward pass size (MB): 6450.82 -Params size (MB): 36.20 -Estimated Total Size (MB): 6487.17 ----------------------------------------------------------------- -''' diff --git a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/detectors/__init__.py b/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/detectors/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/metrics.py b/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/metrics.py deleted file mode 100644 index 668daaf99acb9bbb80d7ca2746926f9d79d55cf0..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/metrics.py +++ /dev/null @@ -1,589 +0,0 @@ -""" -This file implements the evaluation metrics. -""" -import torch -import torch.nn.functional as F -import numpy as np -from torchvision.ops.boxes import batched_nms - -from ..misc.geometry_utils import keypoints_to_grid - - -class Metrics(object): - """Metric evaluation calculator.""" - - def __init__( - self, - detection_thresh, - prob_thresh, - grid_size, - junc_metric_lst=None, - heatmap_metric_lst=None, - pr_metric_lst=None, - desc_metric_lst=None, - ): - # List supported metrics - self.supported_junc_metrics = [ - "junc_precision", - "junc_precision_nms", - "junc_recall", - "junc_recall_nms", - ] - self.supported_heatmap_metrics = ["heatmap_precision", "heatmap_recall"] - self.supported_pr_metrics = ["junc_pr", "junc_nms_pr"] - self.supported_desc_metrics = ["matching_score"] - - # If metric_lst is None, default to use all metrics - if junc_metric_lst is None: - self.junc_metric_lst = self.supported_junc_metrics - else: - self.junc_metric_lst = junc_metric_lst - if heatmap_metric_lst is None: - self.heatmap_metric_lst = self.supported_heatmap_metrics - else: - self.heatmap_metric_lst = heatmap_metric_lst - if pr_metric_lst is None: - self.pr_metric_lst = self.supported_pr_metrics - else: - self.pr_metric_lst = pr_metric_lst - # For the descriptors, the default None assumes no desc metric at all - if desc_metric_lst is None: - self.desc_metric_lst = [] - elif desc_metric_lst == "all": - self.desc_metric_lst = self.supported_desc_metrics - else: - self.desc_metric_lst = desc_metric_lst - - if not self._check_metrics(): - raise ValueError("[Error] Some elements in the metric_lst are invalid.") - - # Metric mapping table - self.metric_table = { - "junc_precision": junction_precision(detection_thresh), - "junc_precision_nms": junction_precision(detection_thresh), - "junc_recall": junction_recall(detection_thresh), - "junc_recall_nms": junction_recall(detection_thresh), - "heatmap_precision": heatmap_precision(prob_thresh), - "heatmap_recall": heatmap_recall(prob_thresh), - "junc_pr": junction_pr(), - "junc_nms_pr": junction_pr(), - "matching_score": matching_score(grid_size), - } - - # Initialize the results - self.metric_results = {} - for key in 
self.metric_table.keys(): - self.metric_results[key] = 0.0 - - def evaluate( - self, - junc_pred, - junc_pred_nms, - junc_gt, - heatmap_pred, - heatmap_gt, - valid_mask, - line_points1=None, - line_points2=None, - desc_pred1=None, - desc_pred2=None, - valid_points=None, - ): - """Perform evaluation.""" - for metric in self.junc_metric_lst: - # If nms metrics then use nms to compute it. - if "nms" in metric: - junc_pred_input = junc_pred_nms - # Use normal inputs instead. - else: - junc_pred_input = junc_pred - self.metric_results[metric] = self.metric_table[metric]( - junc_pred_input, junc_gt, valid_mask - ) - - for metric in self.heatmap_metric_lst: - self.metric_results[metric] = self.metric_table[metric]( - heatmap_pred, heatmap_gt, valid_mask - ) - - for metric in self.pr_metric_lst: - if "nms" in metric: - self.metric_results[metric] = self.metric_table[metric]( - junc_pred_nms, junc_gt, valid_mask - ) - else: - self.metric_results[metric] = self.metric_table[metric]( - junc_pred, junc_gt, valid_mask - ) - - for metric in self.desc_metric_lst: - self.metric_results[metric] = self.metric_table[metric]( - line_points1, line_points2, desc_pred1, desc_pred2, valid_points - ) - - def _check_metrics(self): - """Check if all input metrics are valid.""" - flag = True - for metric in self.junc_metric_lst: - if not metric in self.supported_junc_metrics: - flag = False - break - for metric in self.heatmap_metric_lst: - if not metric in self.supported_heatmap_metrics: - flag = False - break - for metric in self.desc_metric_lst: - if not metric in self.supported_desc_metrics: - flag = False - break - - return flag - - -class AverageMeter(object): - def __init__( - self, - junc_metric_lst=None, - heatmap_metric_lst=None, - is_training=True, - desc_metric_lst=None, - ): - # List supported metrics - self.supported_junc_metrics = [ - "junc_precision", - "junc_precision_nms", - "junc_recall", - "junc_recall_nms", - ] - self.supported_heatmap_metrics = ["heatmap_precision", "heatmap_recall"] - self.supported_pr_metrics = ["junc_pr", "junc_nms_pr"] - self.supported_desc_metrics = ["matching_score"] - # Record loss in training mode - # if is_training: - self.supported_loss = [ - "junc_loss", - "heatmap_loss", - "descriptor_loss", - "total_loss", - ] - - self.is_training = is_training - - # If metric_lst is None, default to use all metrics - if junc_metric_lst is None: - self.junc_metric_lst = self.supported_junc_metrics - else: - self.junc_metric_lst = junc_metric_lst - if heatmap_metric_lst is None: - self.heatmap_metric_lst = self.supported_heatmap_metrics - else: - self.heatmap_metric_lst = heatmap_metric_lst - # For the descriptors, the default None assumes no desc metric at all - if desc_metric_lst is None: - self.desc_metric_lst = [] - elif desc_metric_lst == "all": - self.desc_metric_lst = self.supported_desc_metrics - else: - self.desc_metric_lst = desc_metric_lst - - if not self._check_metrics(): - raise ValueError("[Error] Some elements in the metric_lst are invalid.") - - # Initialize the results - self.metric_results = {} - for key in ( - self.supported_junc_metrics - + self.supported_heatmap_metrics - + self.supported_loss - + self.supported_desc_metrics - ): - self.metric_results[key] = 0.0 - for key in self.supported_pr_metrics: - zero_lst = [0 for _ in range(50)] - self.metric_results[key] = { - "tp": zero_lst, - "tn": zero_lst, - "fp": zero_lst, - "fn": zero_lst, - "precision": zero_lst, - "recall": zero_lst, - } - - # Initialize total count - self.count = 0 - - def update(self, metrics, 
loss_dict=None, num_samples=1):
-        # loss should be given in the training mode
-        if self.is_training and (loss_dict is None):
-            raise ValueError("[Error] loss info should be given in the training mode.")
-
-        # update total counts
-        self.count += num_samples
-
-        # update all the metrics
-        for met in (
-            self.supported_junc_metrics
-            + self.supported_heatmap_metrics
-            + self.supported_desc_metrics
-        ):
-            self.metric_results[met] += num_samples * metrics.metric_results[met]
-
-        # Update all the losses (only provided in training mode)
-        if loss_dict is not None:
-            for loss in loss_dict.keys():
-                self.metric_results[loss] += num_samples * loss_dict[loss]
-
-        # Update all pr counts
-        for pr_met in self.supported_pr_metrics:
-            # Update all tp, tn, fp, fn, precision, and recall.
-            for key in metrics.metric_results[pr_met].keys():
-                # Update each interval
-                for idx in range(len(self.metric_results[pr_met][key])):
-                    self.metric_results[pr_met][key][idx] += (
-                        num_samples * metrics.metric_results[pr_met][key][idx]
-                    )
-
-    def average(self):
-        results = {}
-        for met in self.metric_results.keys():
-            # Skip pr curve metrics
-            if met not in self.supported_pr_metrics:
-                results[met] = self.metric_results[met] / self.count
-            # Only update precision and recall in pr metrics
-            else:
-                met_results = {
-                    "tp": self.metric_results[met]["tp"],
-                    "tn": self.metric_results[met]["tn"],
-                    "fp": self.metric_results[met]["fp"],
-                    "fn": self.metric_results[met]["fn"],
-                    "precision": [],
-                    "recall": [],
-                }
-                for idx in range(len(self.metric_results[met]["precision"])):
-                    met_results["precision"].append(
-                        self.metric_results[met]["precision"][idx] / self.count
-                    )
-                    met_results["recall"].append(
-                        self.metric_results[met]["recall"][idx] / self.count
-                    )
-
-                results[met] = met_results
-
-        return results
-
-    def _check_metrics(self):
-        """Check if all input metrics are valid."""
-        flag = True
-        for metric in self.junc_metric_lst:
-            if metric not in self.supported_junc_metrics:
-                flag = False
-                break
-        for metric in self.heatmap_metric_lst:
-            if metric not in self.supported_heatmap_metrics:
-                flag = False
-                break
-        for metric in self.desc_metric_lst:
-            if metric not in self.supported_desc_metrics:
-                flag = False
-                break
-
-        return flag
-
-
-class junction_precision(object):
-    """Junction precision."""
-
-    def __init__(self, detection_thresh):
-        self.detection_thresh = detection_thresh
-
-    # Compute the evaluation result
-    def __call__(self, junc_pred, junc_gt, valid_mask):
-        # Convert prediction to discrete detection
-        junc_pred = (junc_pred >= self.detection_thresh).astype(int)
-        junc_pred = junc_pred * valid_mask.squeeze()
-
-        # Deal with the corner case of the prediction
-        if np.sum(junc_pred) > 0:
-            precision = np.sum(junc_pred * junc_gt.squeeze()) / np.sum(junc_pred)
-        else:
-            precision = 0
-
-        return float(precision)
-
-
-class junction_recall(object):
-    """Junction recall."""
-
-    def __init__(self, detection_thresh):
-        self.detection_thresh = detection_thresh
-
-    # Compute the evaluation result
-    def __call__(self, junc_pred, junc_gt, valid_mask):
-        # Convert prediction to discrete detection
-        junc_pred = (junc_pred >= self.detection_thresh).astype(int)
-        junc_pred = junc_pred * valid_mask.squeeze()
-
-        # Deal with the corner case of the recall.
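-        # recall = TP / (TP + FN) = sum(pred * gt) / sum(gt); defined as 0 when the
-        # ground truth contains no junctions at all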
-        if np.sum(junc_gt):
-            recall = np.sum(junc_pred * junc_gt.squeeze()) / np.sum(junc_gt)
-        else:
-            recall = 0
-
-        return float(recall)
-
-
-class junction_pr(object):
-    """Junction precision-recall info."""
-
-    def __init__(self, num_threshold=50):
-        self.max = 0.4
-        step = self.max / num_threshold
-        self.min = step
-        self.intervals = np.flip(np.arange(self.min, self.max + step, step))
-
-    def __call__(self, junc_pred_raw, junc_gt, valid_mask):
-        tp_lst = []
-        fp_lst = []
-        tn_lst = []
-        fn_lst = []
-        precision_lst = []
-        recall_lst = []
-
-        valid_mask = valid_mask.squeeze()
-        # Iterate through all the thresholds
-        for thresh in list(self.intervals):
-            # Convert prediction to discrete detection
-            junc_pred = (junc_pred_raw >= thresh).astype(int)
-            junc_pred = junc_pred * valid_mask
-
-            # Compute tp, fp, tn, fn
-            junc_gt = junc_gt.squeeze()
-            tp = np.sum(junc_pred * junc_gt)
-            tn = np.sum(
-                (junc_pred == 0).astype(float)
-                * (junc_gt == 0).astype(float)
-                * valid_mask
-            )
-            fp = np.sum(
-                (junc_pred == 1).astype(float)
-                * (junc_gt == 0).astype(float)
-                * valid_mask
-            )
-            fn = np.sum(
-                (junc_pred == 0).astype(float)
-                * (junc_gt == 1).astype(float)
-                * valid_mask
-            )
-
-            tp_lst.append(tp)
-            tn_lst.append(tn)
-            fp_lst.append(fp)
-            fn_lst.append(fn)
-            # Guard against 0 / 0 when there are no detections or no positives
-            precision_lst.append(tp / (tp + fp) if (tp + fp) > 0 else 0.0)
-            recall_lst.append(tp / (tp + fn) if (tp + fn) > 0 else 0.0)
-
-        return {
-            "tp": np.array(tp_lst),
-            "tn": np.array(tn_lst),
-            "fp": np.array(fp_lst),
-            "fn": np.array(fn_lst),
-            "precision": np.array(precision_lst),
-            "recall": np.array(recall_lst),
-        }
-
-
-class heatmap_precision(object):
-    """Heatmap precision."""
-
-    def __init__(self, prob_thresh):
-        self.prob_thresh = prob_thresh
-
-    def __call__(self, heatmap_pred, heatmap_gt, valid_mask):
-        # Assume NHWC (Handle L1 and L2 cases) NxHxWx1
-        heatmap_pred = np.squeeze(heatmap_pred > self.prob_thresh)
-        heatmap_pred = heatmap_pred * valid_mask.squeeze()
-
-        # Deal with the corner case of the prediction
-        if np.sum(heatmap_pred) > 0:
-            precision = np.sum(heatmap_pred * heatmap_gt.squeeze()) / np.sum(
-                heatmap_pred
-            )
-        else:
-            precision = 0.0
-
-        return precision
-
-
-class heatmap_recall(object):
-    """Heatmap recall."""
-
-    def __init__(self, prob_thresh):
-        self.prob_thresh = prob_thresh
-
-    def __call__(self, heatmap_pred, heatmap_gt, valid_mask):
-        # Assume NHWC (Handle L1 and L2 cases) NxHxWx1
-        heatmap_pred = np.squeeze(heatmap_pred > self.prob_thresh)
-        heatmap_pred = heatmap_pred * valid_mask.squeeze()
-
-        # Deal with the corner case of the ground truth
-        if np.sum(heatmap_gt) > 0:
-            recall = np.sum(heatmap_pred * heatmap_gt.squeeze()) / np.sum(heatmap_gt)
-        else:
-            recall = 0.0
-
-        return recall
-
-
-class matching_score(object):
-    """Descriptors matching score."""
-
-    def __init__(self, grid_size):
-        self.grid_size = grid_size
-
-    def __call__(self, points1, points2, desc_pred1, desc_pred2, line_indices):
-        b_size, _, Hc, Wc = desc_pred1.size()
-        img_size = (Hc * self.grid_size, Wc * self.grid_size)
-        device = desc_pred1.device
-
-        # Extract valid keypoints
-        n_points = line_indices.size()[1]
-        valid_points = line_indices.bool().flatten()
-        n_correct_points = torch.sum(valid_points).item()
-        if n_correct_points == 0:
-            return torch.tensor(0.0, dtype=torch.float, device=device)
-
-        # Convert the keypoints to a grid suitable for interpolation
-        grid1 = keypoints_to_grid(points1, img_size)
-        grid2 = keypoints_to_grid(points2, img_size)
-
-        # Extract the descriptors
-        desc1 = (
-            F.grid_sample(desc_pred1, grid1)
-            .permute(0, 2, 3, 1)
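-            # grid_sample bilinearly interpolates the dense descriptor map at each keypoint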
.reshape(b_size * n_points, -1)[valid_points] - ) - desc1 = F.normalize(desc1, dim=1) - desc2 = ( - F.grid_sample(desc_pred2, grid2) - .permute(0, 2, 3, 1) - .reshape(b_size * n_points, -1)[valid_points] - ) - desc2 = F.normalize(desc2, dim=1) - desc_dists = 2 - 2 * (desc1 @ desc2.t()) - - # Compute percentage of correct matches - matches0 = torch.min(desc_dists, dim=1)[1] - matches1 = torch.min(desc_dists, dim=0)[1] - matching_score = matches1[matches0] == torch.arange(len(matches0)).to(device) - matching_score = matching_score.float().mean() - return matching_score - - -def super_nms(prob_predictions, dist_thresh, prob_thresh=0.01, top_k=0): - """Non-maximum suppression adapted from SuperPoint.""" - # Iterate through batch dimension - im_h = prob_predictions.shape[1] - im_w = prob_predictions.shape[2] - output_lst = [] - for i in range(prob_predictions.shape[0]): - # print(i) - prob_pred = prob_predictions[i, ...] - # Filter the points using prob_thresh - coord = np.where(prob_pred >= prob_thresh) # HW format - points = np.concatenate( - (coord[0][..., None], coord[1][..., None]), axis=1 - ) # HW format - - # Get the probability score - prob_score = prob_pred[points[:, 0], points[:, 1]] - - # Perform super nms - # Modify the in_points to xy format (instead of HW format) - in_points = np.concatenate( - (coord[1][..., None], coord[0][..., None], prob_score), axis=1 - ).T - keep_points_, keep_inds = nms_fast(in_points, im_h, im_w, dist_thresh) - # Remember to flip outputs back to HW format - keep_points = np.round(np.flip(keep_points_[:2, :], axis=0).T) - keep_score = keep_points_[-1, :].T - - # Whether we only keep the topk value - if (top_k > 0) or (top_k is None): - k = min([keep_points.shape[0], top_k]) - keep_points = keep_points[:k, :] - keep_score = keep_score[:k] - - # Re-compose the probability map - output_map = np.zeros([im_h, im_w]) - output_map[ - keep_points[:, 0].astype(np.int), keep_points[:, 1].astype(np.int) - ] = keep_score.squeeze() - - output_lst.append(output_map[None, ...]) - - return np.concatenate(output_lst, axis=0) - - -def nms_fast(in_corners, H, W, dist_thresh): - """ - Run a faster approximate Non-Max-Suppression on numpy corners shaped: - 3xN [x_i,y_i,conf_i]^T - - Algo summary: Create a grid sized HxW. Assign each corner location a 1, - rest are zeros. Iterate through all the 1's and convert them to -1 or 0. - Suppress points by setting nearby values to 0. - - Grid Value Legend: - -1 : Kept. - 0 : Empty or suppressed. - 1 : To be processed (converted to either kept or supressed). - - NOTE: The NMS first rounds points to integers, so NMS distance might not - be exactly dist_thresh. It also assumes points are within image boundary. - - Inputs - in_corners - 3xN numpy array with corners [x_i, y_i, confidence_i]^T. - H - Image height. - W - Image width. - dist_thresh - Distance to suppress, measured as an infinite distance. - Returns - nmsed_corners - 3xN numpy matrix with surviving corners. - nmsed_inds - N length numpy vector with surviving corner indices. - """ - grid = np.zeros((H, W)).astype(int) # Track NMS data. - inds = np.zeros((H, W)).astype(int) # Store indices of points. - # Sort by confidence and round to nearest int. - inds1 = np.argsort(-in_corners[2, :]) - corners = in_corners[:, inds1] - rcorners = corners[:2, :].round().astype(int) # Rounded corners. - # Check for edge case of 0 or 1 corners. 
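-    # (no corners -> nothing survives; a single corner has no competitor, so it is kept as-is)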
- if rcorners.shape[1] == 0: - return np.zeros((3, 0)).astype(int), np.zeros(0).astype(int) - if rcorners.shape[1] == 1: - out = np.vstack((rcorners, in_corners[2])).reshape(3, 1) - return out, np.zeros((1)).astype(int) - # Initialize the grid. - for i, rc in enumerate(rcorners.T): - grid[rcorners[1, i], rcorners[0, i]] = 1 - inds[rcorners[1, i], rcorners[0, i]] = i - # Pad the border of the grid, so that we can NMS points near the border. - pad = dist_thresh - grid = np.pad(grid, ((pad, pad), (pad, pad)), mode="constant") - # Iterate through points, highest to lowest conf, suppress neighborhood. - count = 0 - for i, rc in enumerate(rcorners.T): - # Account for top and left padding. - pt = (rc[0] + pad, rc[1] + pad) - if grid[pt[1], pt[0]] == 1: # If not yet suppressed. - grid[pt[1] - pad : pt[1] + pad + 1, pt[0] - pad : pt[0] + pad + 1] = 0 - grid[pt[1], pt[0]] = -1 - count += 1 - # Get all surviving -1's and return sorted array of remaining corners. - keepy, keepx = np.where(grid == -1) - keepy, keepx = keepy - pad, keepx - pad - inds_keep = inds[keepy, keepx] - out = corners[:, inds_keep] - values = out[-1, :] - inds2 = np.argsort(-values) - out = out[:, inds2] - out_inds = inds1[inds_keep[inds2]] - return out, out_inds diff --git a/spaces/Realcat/image-matching-webui/third_party/d2net/lib/utils.py b/spaces/Realcat/image-matching-webui/third_party/d2net/lib/utils.py deleted file mode 100644 index d612d2ecc543c9cf2cf405395b05e2dba6a29d46..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/d2net/lib/utils.py +++ /dev/null @@ -1,167 +0,0 @@ -import matplotlib.pyplot as plt - -import numpy as np - -import torch - -from lib.exceptions import EmptyTensorError - - -def preprocess_image(image, preprocessing=None): - image = image.astype(np.float32) - image = np.transpose(image, [2, 0, 1]) - if preprocessing is None: - pass - elif preprocessing == 'caffe': - # RGB -> BGR - image = image[:: -1, :, :] - # Zero-center by mean pixel - mean = np.array([103.939, 116.779, 123.68]) - image = image - mean.reshape([3, 1, 1]) - elif preprocessing == 'torch': - image /= 255.0 - mean = np.array([0.485, 0.456, 0.406]) - std = np.array([0.229, 0.224, 0.225]) - image = (image - mean.reshape([3, 1, 1])) / std.reshape([3, 1, 1]) - else: - raise ValueError('Unknown preprocessing parameter.') - return image - - -def imshow_image(image, preprocessing=None): - if preprocessing is None: - pass - elif preprocessing == 'caffe': - mean = np.array([103.939, 116.779, 123.68]) - image = image + mean.reshape([3, 1, 1]) - # RGB -> BGR - image = image[:: -1, :, :] - elif preprocessing == 'torch': - mean = np.array([0.485, 0.456, 0.406]) - std = np.array([0.229, 0.224, 0.225]) - image = image * std.reshape([3, 1, 1]) + mean.reshape([3, 1, 1]) - image *= 255.0 - else: - raise ValueError('Unknown preprocessing parameter.') - image = np.transpose(image, [1, 2, 0]) - image = np.round(image).astype(np.uint8) - return image - - -def grid_positions(h, w, device, matrix=False): - lines = torch.arange( - 0, h, device=device - ).view(-1, 1).float().repeat(1, w) - columns = torch.arange( - 0, w, device=device - ).view(1, -1).float().repeat(h, 1) - if matrix: - return torch.stack([lines, columns], dim=0) - else: - return torch.cat([lines.view(1, -1), columns.view(1, -1)], dim=0) - - -def upscale_positions(pos, scaling_steps=0): - for _ in range(scaling_steps): - pos = pos * 2 + 0.5 - return pos - - -def downscale_positions(pos, scaling_steps=0): - for _ in range(scaling_steps): - pos = (pos - 
0.5) / 2 - return pos - - -def interpolate_dense_features(pos, dense_features, return_corners=False): - device = pos.device - - ids = torch.arange(0, pos.size(1), device=device) - - _, h, w = dense_features.size() - - i = pos[0, :] - j = pos[1, :] - - # Valid corners - i_top_left = torch.floor(i).long() - j_top_left = torch.floor(j).long() - valid_top_left = torch.min(i_top_left >= 0, j_top_left >= 0) - - i_top_right = torch.floor(i).long() - j_top_right = torch.ceil(j).long() - valid_top_right = torch.min(i_top_right >= 0, j_top_right < w) - - i_bottom_left = torch.ceil(i).long() - j_bottom_left = torch.floor(j).long() - valid_bottom_left = torch.min(i_bottom_left < h, j_bottom_left >= 0) - - i_bottom_right = torch.ceil(i).long() - j_bottom_right = torch.ceil(j).long() - valid_bottom_right = torch.min(i_bottom_right < h, j_bottom_right < w) - - valid_corners = torch.min( - torch.min(valid_top_left, valid_top_right), - torch.min(valid_bottom_left, valid_bottom_right) - ) - - i_top_left = i_top_left[valid_corners] - j_top_left = j_top_left[valid_corners] - - i_top_right = i_top_right[valid_corners] - j_top_right = j_top_right[valid_corners] - - i_bottom_left = i_bottom_left[valid_corners] - j_bottom_left = j_bottom_left[valid_corners] - - i_bottom_right = i_bottom_right[valid_corners] - j_bottom_right = j_bottom_right[valid_corners] - - ids = ids[valid_corners] - if ids.size(0) == 0: - raise EmptyTensorError - - # Interpolation - i = i[ids] - j = j[ids] - dist_i_top_left = i - i_top_left.float() - dist_j_top_left = j - j_top_left.float() - w_top_left = (1 - dist_i_top_left) * (1 - dist_j_top_left) - w_top_right = (1 - dist_i_top_left) * dist_j_top_left - w_bottom_left = dist_i_top_left * (1 - dist_j_top_left) - w_bottom_right = dist_i_top_left * dist_j_top_left - - descriptors = ( - w_top_left * dense_features[:, i_top_left, j_top_left] + - w_top_right * dense_features[:, i_top_right, j_top_right] + - w_bottom_left * dense_features[:, i_bottom_left, j_bottom_left] + - w_bottom_right * dense_features[:, i_bottom_right, j_bottom_right] - ) - - pos = torch.cat([i.view(1, -1), j.view(1, -1)], dim=0) - - if not return_corners: - return [descriptors, pos, ids] - else: - corners = torch.stack([ - torch.stack([i_top_left, j_top_left], dim=0), - torch.stack([i_top_right, j_top_right], dim=0), - torch.stack([i_bottom_left, j_bottom_left], dim=0), - torch.stack([i_bottom_right, j_bottom_right], dim=0) - ], dim=0) - return [descriptors, pos, ids, corners] - - -def savefig(filepath, fig=None, dpi=None): - # TomNorway - https://stackoverflow.com/a/53516034 - if not fig: - fig = plt.gcf() - - plt.subplots_adjust(0, 0, 1, 1, 0, 0) - for ax in fig.axes: - ax.axis('off') - ax.margins(0, 0) - ax.xaxis.set_major_locator(plt.NullLocator()) - ax.yaxis.set_major_locator(plt.NullLocator()) - - fig.savefig(filepath, pad_inches=0, bbox_inches='tight', dpi=dpi) diff --git a/spaces/Realcat/image-matching-webui/third_party/lanet/network_v1/model.py b/spaces/Realcat/image-matching-webui/third_party/lanet/network_v1/model.py deleted file mode 100644 index 51ca366db1d8afd76722f5c51ccfbf8b081c61e2..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/lanet/network_v1/model.py +++ /dev/null @@ -1,55 +0,0 @@ -import torch -import torch.nn as nn -import torchvision.transforms as tvf - -from .modules import InterestPointModule, CorrespondenceModule - - -def warp_homography_batch(sources, homographies): - """ - Batch warp keypoints given homographies. 
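-    (Each (x, y) keypoint is lifted to homogeneous coordinates, multiplied by the
-    homography, and divided by the resulting z component, as the inline note below spells out.)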
From https://github.com/TRI-ML/KP2D. - - Parameters - ---------- - sources: torch.Tensor (B,H,W,C) - Keypoints vector. - homographies: torch.Tensor (B,3,3) - Homographies. - - Returns - ------- - warped_sources: torch.Tensor (B,H,W,C) - Warped keypoints vector. - """ - B, H, W, _ = sources.shape - warped_sources = [] - for b in range(B): - source = sources[b].clone() - source = source.view(-1, 2) - """ - [X, [M11, M12, M13 [x, M11*x + M12*y + M13 [M11, M12 [M13, - Y, = M21, M22, M23 * y, = M21*x + M22*y + M23 = [x, y] * M21, M22 + M23, - Z] M31, M32, M33] 1] M31*x + M32*y + M33 M31, M32].T M33] - """ - source = torch.addmm(homographies[b, :, 2], source, homographies[b, :, :2].t()) - source.mul_(1 / source[:, 2].unsqueeze(1)) - source = source[:, :2].contiguous().view(H, W, 2) - warped_sources.append(source) - return torch.stack(warped_sources, dim=0) - - -class PointModel(nn.Module): - def __init__(self, is_test=False): - super(PointModel, self).__init__() - self.is_test = is_test - self.interestpoint_module = InterestPointModule(is_test=self.is_test) - self.correspondence_module = CorrespondenceModule() - self.norm_rgb = tvf.Normalize( - mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] - ) - - def forward(self, *args): - img = args[0] - img = self.norm_rgb(img) - score, coord, desc = self.interestpoint_module(img) - return score, coord, desc diff --git a/spaces/Ricecake123/RVC-demo/train/losses.py b/spaces/Ricecake123/RVC-demo/train/losses.py deleted file mode 100644 index b89038f14d06d7fae43628183e9ffb465e4edafd..0000000000000000000000000000000000000000 --- a/spaces/Ricecake123/RVC-demo/train/losses.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch -from torch.nn import functional as F - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg**2) - loss += r_loss + g_loss - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p) ** 2) * torch.exp(-2.0 * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/RickyMartin-dev/Text_to_Image_Diffusion/app.py b/spaces/RickyMartin-dev/Text_to_Image_Diffusion/app.py deleted file mode 100644 index 313a545af2ec2ec1a2eedc23042fb34d14440050..0000000000000000000000000000000000000000 --- a/spaces/RickyMartin-dev/Text_to_Image_Diffusion/app.py +++ /dev/null @@ -1,19 +0,0 @@ -# Imports -from text_to_image import TextToImageTool -import gradio as gr - -# Define Text to Image Tool -tool = TextToImageTool() - -# Helper Function, necessary for Gradio -def fn(*args, **kwargs): - return tool(*args, **kwargs) - -# Gradio Interface -gr.Interface( - 
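-    # the Tool instance supplies the Gradio component specs (tool.inputs / tool.outputs)
-    # and the description text rendered with the interface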
fn=fn, - inputs=tool.inputs, - outputs=tool.outputs, - title="TextToImageTool", - article=tool.description, -).queue(concurrency_count=5).launch() \ No newline at end of file diff --git a/spaces/Riksarkivet/htr_demo/models/RmtDet_regions/rtmdet_m_textregions_2_concat.py b/spaces/Riksarkivet/htr_demo/models/RmtDet_regions/rtmdet_m_textregions_2_concat.py deleted file mode 100644 index 5866a05825422c7af6451962bff4c874ffe51ff5..0000000000000000000000000000000000000000 --- a/spaces/Riksarkivet/htr_demo/models/RmtDet_regions/rtmdet_m_textregions_2_concat.py +++ /dev/null @@ -1,380 +0,0 @@ -default_scope = "mmdet" -default_hooks = dict( - timer=dict(type="IterTimerHook"), - logger=dict(type="LoggerHook", interval=100), - param_scheduler=dict(type="ParamSchedulerHook"), - checkpoint=dict(type="CheckpointHook", interval=1, max_keep_ckpts=5, save_best="auto"), - sampler_seed=dict(type="DistSamplerSeedHook"), - visualization=dict(type="DetVisualizationHook"), -) -env_cfg = dict(cudnn_benchmark=False, mp_cfg=dict(mp_start_method="fork", opencv_num_threads=0), dist_cfg=dict(backend="nccl")) -vis_backends = [dict(type="LocalVisBackend")] -visualizer = dict(type="DetLocalVisualizer", vis_backends=[dict(type="LocalVisBackend")], name="visualizer", save_dir="./") -log_processor = dict(type="LogProcessor", window_size=50, by_epoch=True) -log_level = "INFO" -load_from = "/home/erik/Riksarkivet/Projects/HTR_Pipeline/models/checkpoints/rtmdet_regions_6/epoch_11.pth" -resume = True -train_cfg = dict(type="EpochBasedTrainLoop", max_epochs=12, val_interval=12, dynamic_intervals=[(10, 1)]) -val_cfg = dict(type="ValLoop") -test_cfg = dict( - type="TestLoop", - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="Resize", scale=(640, 640), keep_ratio=True), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="PackDetInputs", meta_keys=("img_id", "img_path", "ori_shape", "img_shape", "scale_factor")), - ], -) -param_scheduler = [ - dict(type="LinearLR", start_factor=1e-05, by_epoch=False, begin=0, end=1000), - dict(type="CosineAnnealingLR", eta_min=1.25e-05, begin=6, end=12, T_max=6, by_epoch=True, convert_to_iter_based=True), -] -optim_wrapper = dict( - type="OptimWrapper", - optimizer=dict(type="AdamW", lr=0.00025, weight_decay=0.05), - paramwise_cfg=dict(norm_decay_mult=0, bias_decay_mult=0, bypass_duplicate=True), -) -auto_scale_lr = dict(enable=False, base_batch_size=16) -dataset_type = "CocoDataset" -data_root = "data/coco/" -file_client_args = dict(backend="disk") -train_pipeline = [ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="LoadAnnotations", with_bbox=True, with_mask=True, poly2mask=False), - dict(type="CachedMosaic", img_scale=(640, 640), pad_val=114.0), - dict(type="RandomResize", scale=(1280, 1280), ratio_range=(0.1, 2.0), keep_ratio=True), - dict(type="RandomCrop", crop_size=(640, 640), recompute_bbox=True, allow_negative_crop=True), - dict(type="YOLOXHSVRandomAug"), - dict(type="RandomFlip", prob=0.5), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="CachedMixUp", img_scale=(640, 640), ratio_range=(1.0, 1.0), max_cached_images=20, pad_val=(114, 114, 114)), - dict(type="FilterAnnotations", min_gt_bbox_wh=(1, 1)), - dict(type="PackDetInputs"), -] -test_pipeline = [ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="Resize", scale=(640, 640), keep_ratio=True), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 
114, 114))), - dict(type="PackDetInputs", meta_keys=("img_id", "img_path", "ori_shape", "img_shape", "scale_factor")), -] -tta_model = dict(type="DetTTAModel", tta_cfg=dict(nms=dict(type="nms", iou_threshold=0.6), max_per_img=100)) -img_scales = [(640, 640), (320, 320), (960, 960)] -tta_pipeline = [ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict( - type="TestTimeAug", - transforms=[ - [ - {"type": "Resize", "scale": (640, 640), "keep_ratio": True}, - {"type": "Resize", "scale": (320, 320), "keep_ratio": True}, - {"type": "Resize", "scale": (960, 960), "keep_ratio": True}, - ], - [{"type": "RandomFlip", "prob": 1.0}, {"type": "RandomFlip", "prob": 0.0}], - [{"type": "Pad", "size": (960, 960), "pad_val": {"img": (114, 114, 114)}}], - [ - { - "type": "PackDetInputs", - "meta_keys": ("img_id", "img_path", "ori_shape", "img_shape", "scale_factor", "flip", "flip_direction"), - } - ], - ], - ), -] -model = dict( - type="RTMDet", - data_preprocessor=dict( - type="DetDataPreprocessor", mean=[103.53, 116.28, 123.675], std=[57.375, 57.12, 58.395], bgr_to_rgb=False, batch_augments=None - ), - backbone=dict( - type="CSPNeXt", - arch="P5", - expand_ratio=0.5, - deepen_factor=0.67, - widen_factor=0.75, - channel_attention=True, - norm_cfg=dict(type="SyncBN"), - act_cfg=dict(type="SiLU", inplace=True), - ), - neck=dict( - type="CSPNeXtPAFPN", - in_channels=[192, 384, 768], - out_channels=192, - num_csp_blocks=2, - expand_ratio=0.5, - norm_cfg=dict(type="SyncBN"), - act_cfg=dict(type="SiLU", inplace=True), - ), - bbox_head=dict( - type="RTMDetInsSepBNHead", - num_classes=80, - in_channels=192, - stacked_convs=2, - share_conv=True, - pred_kernel_size=1, - feat_channels=192, - act_cfg=dict(type="SiLU", inplace=True), - norm_cfg=dict(type="SyncBN", requires_grad=True), - anchor_generator=dict(type="MlvlPointGenerator", offset=0, strides=[8, 16, 32]), - bbox_coder=dict(type="DistancePointBBoxCoder"), - loss_cls=dict(type="QualityFocalLoss", use_sigmoid=True, beta=2.0, loss_weight=1.0), - loss_bbox=dict(type="GIoULoss", loss_weight=2.0), - loss_mask=dict(type="DiceLoss", loss_weight=2.0, eps=5e-06, reduction="mean"), - ), - train_cfg=dict(assigner=dict(type="DynamicSoftLabelAssigner", topk=13), allowed_border=-1, pos_weight=-1, debug=False), - test_cfg=dict(nms_pre=200, min_bbox_size=0, score_thr=0.4, nms=dict(type="nms", iou_threshold=0.6), max_per_img=50, mask_thr_binary=0.5), -) -train_pipeline_stage2 = [ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="LoadAnnotations", with_bbox=True, with_mask=True, poly2mask=False), - dict(type="RandomResize", scale=(640, 640), ratio_range=(0.1, 2.0), keep_ratio=True), - dict(type="RandomCrop", crop_size=(640, 640), recompute_bbox=True, allow_negative_crop=True), - dict(type="FilterAnnotations", min_gt_bbox_wh=(1, 1)), - dict(type="YOLOXHSVRandomAug"), - dict(type="RandomFlip", prob=0.5), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="PackDetInputs"), -] -train_dataloader = dict( - batch_size=2, - num_workers=1, - batch_sampler=None, - pin_memory=True, - persistent_workers=True, - sampler=dict(type="DefaultSampler", shuffle=True), - dataset=dict( - type="ConcatDataset", - datasets=[ - dict( - type="CocoDataset", - metainfo=dict(classes="TextRegion", palette=[(220, 20, 60)]), - data_prefix=dict(img="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/police_records/"), - 
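-                # first of the two concatenated training sets: police-record scans with
-                # COCO-style text-region annotations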
ann_file="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/police_records/gt_files/coco_regions2.json", - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="LoadAnnotations", with_bbox=True, with_mask=True, poly2mask=False), - dict(type="CachedMosaic", img_scale=(640, 640), pad_val=114.0), - dict(type="RandomResize", scale=(1280, 1280), ratio_range=(0.1, 2.0), keep_ratio=True), - dict(type="RandomCrop", crop_size=(640, 640), recompute_bbox=True, allow_negative_crop=True), - dict(type="YOLOXHSVRandomAug"), - dict(type="RandomFlip", prob=0.5), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="CachedMixUp", img_scale=(640, 640), ratio_range=(1.0, 1.0), max_cached_images=20, pad_val=(114, 114, 114)), - dict(type="FilterAnnotations", min_gt_bbox_wh=(1, 1)), - dict(type="PackDetInputs"), - ], - ), - dict( - type="CocoDataset", - metainfo=dict(classes="TextRegion", palette=[(220, 20, 60)]), - data_prefix=dict(img="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/ICDAR-2019/clean/"), - ann_file="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/ICDAR-2019/clean/gt_files/coco_regions2.json", - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="LoadAnnotations", with_bbox=True, with_mask=True, poly2mask=False), - dict(type="CachedMosaic", img_scale=(640, 640), pad_val=114.0), - dict(type="RandomResize", scale=(1280, 1280), ratio_range=(0.1, 2.0), keep_ratio=True), - dict(type="RandomCrop", crop_size=(640, 640), recompute_bbox=True, allow_negative_crop=True), - dict(type="YOLOXHSVRandomAug"), - dict(type="RandomFlip", prob=0.5), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="CachedMixUp", img_scale=(640, 640), ratio_range=(1.0, 1.0), max_cached_images=20, pad_val=(114, 114, 114)), - dict(type="FilterAnnotations", min_gt_bbox_wh=(1, 1)), - dict(type="PackDetInputs"), - ], - ), - ], - ), -) -val_dataloader = dict( - batch_size=1, - num_workers=10, - dataset=dict( - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="Resize", scale=(640, 640), keep_ratio=True), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="PackDetInputs", meta_keys=("img_id", "img_path", "ori_shape", "img_shape", "scale_factor")), - ], - type="CocoDataset", - metainfo=dict(classes="TextRegion", palette=[(220, 20, 60)]), - data_prefix=dict(img="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/ICDAR-2019/clean/"), - ann_file="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/police_records/gt_files/coco_regions2.json", - test_mode=True, - ), - persistent_workers=True, - drop_last=False, - sampler=dict(type="DefaultSampler", shuffle=False), -) -test_dataloader = dict( - batch_size=1, - num_workers=10, - dataset=dict( - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="Resize", scale=(640, 640), keep_ratio=True), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="PackDetInputs", meta_keys=("img_id", "img_path", "ori_shape", "img_shape", "scale_factor")), - ], - type="CocoDataset", - metainfo=dict(classes="TextRegion", palette=[(220, 20, 60)]), - data_prefix=dict(img="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/ICDAR-2019/clean/"), - 
ann_file="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/police_records/gt_files/coco_regions2.json", - test_mode=True, - ), - persistent_workers=True, - drop_last=False, - sampler=dict(type="DefaultSampler", shuffle=False), -) -max_epochs = 12 -stage2_num_epochs = 2 -base_lr = 0.00025 -interval = 12 -val_evaluator = dict( - proposal_nums=(100, 1, 10), - metric=["bbox", "segm"], - type="CocoMetric", - ann_file="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/ICDAR-2019/clean/gt_files/coco_regions2.json", -) -test_evaluator = dict( - proposal_nums=(100, 1, 10), - metric=["bbox", "segm"], - type="CocoMetric", - ann_file="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/ICDAR-2019/clean/gt_files/coco_regions2.json", -) -custom_hooks = [ - dict(type="EMAHook", ema_type="ExpMomentumEMA", momentum=0.0002, update_buffers=True, priority=49), - dict( - type="PipelineSwitchHook", - switch_epoch=10, - switch_pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="LoadAnnotations", with_bbox=True, with_mask=True, poly2mask=False), - dict(type="RandomResize", scale=(640, 640), ratio_range=(0.1, 2.0), keep_ratio=True), - dict(type="RandomCrop", crop_size=(640, 640), recompute_bbox=True, allow_negative_crop=True), - dict(type="FilterAnnotations", min_gt_bbox_wh=(1, 1)), - dict(type="YOLOXHSVRandomAug"), - dict(type="RandomFlip", prob=0.5), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="PackDetInputs"), - ], - ), -] -work_dir = "/home/erik/Riksarkivet/Projects/HTR_Pipeline/models/checkpoints/rtmdet_regions_6" -train_batch_size_per_gpu = 2 -val_batch_size_per_gpu = 1 -train_num_workers = 1 -num_classes = 1 -metainfo = dict(classes="TextRegion", palette=[(220, 20, 60)]) -icdar_2019 = dict( - type="CocoDataset", - metainfo=dict(classes="TextRegion", palette=[(220, 20, 60)]), - data_prefix=dict(img="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/ICDAR-2019/clean/"), - ann_file="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/ICDAR-2019/clean/gt_files/coco_regions2.json", - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="LoadAnnotations", with_bbox=True, with_mask=True, poly2mask=False), - dict(type="CachedMosaic", img_scale=(640, 640), pad_val=114.0), - dict(type="RandomResize", scale=(1280, 1280), ratio_range=(0.1, 2.0), keep_ratio=True), - dict(type="RandomCrop", crop_size=(640, 640), recompute_bbox=True, allow_negative_crop=True), - dict(type="YOLOXHSVRandomAug"), - dict(type="RandomFlip", prob=0.5), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="CachedMixUp", img_scale=(640, 640), ratio_range=(1.0, 1.0), max_cached_images=20, pad_val=(114, 114, 114)), - dict(type="FilterAnnotations", min_gt_bbox_wh=(1, 1)), - dict(type="PackDetInputs"), - ], -) -icdar_2019_test = dict( - type="CocoDataset", - metainfo=dict(classes="TextRegion", palette=[(220, 20, 60)]), - data_prefix=dict(img="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/ICDAR-2019/clean/"), - ann_file="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/ICDAR-2019/clean/gt_files/coco_regions2.json", - test_mode=True, - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="Resize", scale=(640, 640), keep_ratio=True), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="PackDetInputs", meta_keys=("img_id", 
"img_path", "ori_shape", "img_shape", "scale_factor")), - ], -) -police_records = dict( - type="CocoDataset", - metainfo=dict(classes="TextRegion", palette=[(220, 20, 60)]), - data_prefix=dict(img="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/police_records/"), - ann_file="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/police_records/gt_files/coco_regions2.json", - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="LoadAnnotations", with_bbox=True, with_mask=True, poly2mask=False), - dict(type="CachedMosaic", img_scale=(640, 640), pad_val=114.0), - dict(type="RandomResize", scale=(1280, 1280), ratio_range=(0.1, 2.0), keep_ratio=True), - dict(type="RandomCrop", crop_size=(640, 640), recompute_bbox=True, allow_negative_crop=True), - dict(type="YOLOXHSVRandomAug"), - dict(type="RandomFlip", prob=0.5), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="CachedMixUp", img_scale=(640, 640), ratio_range=(1.0, 1.0), max_cached_images=20, pad_val=(114, 114, 114)), - dict(type="FilterAnnotations", min_gt_bbox_wh=(1, 1)), - dict(type="PackDetInputs"), - ], -) -train_list = [ - dict( - type="CocoDataset", - metainfo=dict(classes="TextRegion", palette=[(220, 20, 60)]), - data_prefix=dict(img="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/police_records/"), - ann_file="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/police_records/gt_files/coco_regions2.json", - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="LoadAnnotations", with_bbox=True, with_mask=True, poly2mask=False), - dict(type="CachedMosaic", img_scale=(640, 640), pad_val=114.0), - dict(type="RandomResize", scale=(1280, 1280), ratio_range=(0.1, 2.0), keep_ratio=True), - dict(type="RandomCrop", crop_size=(640, 640), recompute_bbox=True, allow_negative_crop=True), - dict(type="YOLOXHSVRandomAug"), - dict(type="RandomFlip", prob=0.5), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="CachedMixUp", img_scale=(640, 640), ratio_range=(1.0, 1.0), max_cached_images=20, pad_val=(114, 114, 114)), - dict(type="FilterAnnotations", min_gt_bbox_wh=(1, 1)), - dict(type="PackDetInputs"), - ], - ), - dict( - type="CocoDataset", - metainfo=dict(classes="TextRegion", palette=[(220, 20, 60)]), - data_prefix=dict(img="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/ICDAR-2019/clean/"), - ann_file="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/ICDAR-2019/clean/gt_files/coco_regions2.json", - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="LoadAnnotations", with_bbox=True, with_mask=True, poly2mask=False), - dict(type="CachedMosaic", img_scale=(640, 640), pad_val=114.0), - dict(type="RandomResize", scale=(1280, 1280), ratio_range=(0.1, 2.0), keep_ratio=True), - dict(type="RandomCrop", crop_size=(640, 640), recompute_bbox=True, allow_negative_crop=True), - dict(type="YOLOXHSVRandomAug"), - dict(type="RandomFlip", prob=0.5), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="CachedMixUp", img_scale=(640, 640), ratio_range=(1.0, 1.0), max_cached_images=20, pad_val=(114, 114, 114)), - dict(type="FilterAnnotations", min_gt_bbox_wh=(1, 1)), - dict(type="PackDetInputs"), - ], - ), -] -test_list = [ - dict( - type="CocoDataset", - metainfo=dict(classes="TextRegion", palette=[(220, 20, 60)]), - 
data_prefix=dict(img="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/ICDAR-2019/clean/"), - ann_file="/media/erik/Elements/Riksarkivet/data/datasets/htr/segmentation/ICDAR-2019/clean/gt_files/coco_regions2.json", - test_mode=True, - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="Resize", scale=(640, 640), keep_ratio=True), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="PackDetInputs", meta_keys=("img_id", "img_path", "ori_shape", "img_shape", "scale_factor")), - ], - ) -] -pipeline = [ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="Resize", scale=(640, 640), keep_ratio=True), - dict(type="Pad", size=(640, 640), pad_val=dict(img=(114, 114, 114))), - dict(type="PackDetInputs", meta_keys=("img_id", "img_path", "ori_shape", "img_shape", "scale_factor")), -] -launcher = "pytorch" diff --git a/spaces/Ritori/play_with_baby_llama2/sample_data/README.md b/spaces/Ritori/play_with_baby_llama2/sample_data/README.md deleted file mode 100644 index e46cdae34844234bc75daeefda03a47aa7f19516..0000000000000000000000000000000000000000 --- a/spaces/Ritori/play_with_baby_llama2/sample_data/README.md +++ /dev/null @@ -1,19 +0,0 @@ -This directory includes a few sample datasets to get you started. - -* `california_housing_data*.csv` is California housing data from the 1990 US - Census; more information is available at: - https://developers.google.com/machine-learning/crash-course/california-housing-data-description - -* `mnist_*.csv` is a small sample of the - [MNIST database](https://en.wikipedia.org/wiki/MNIST_database), which is - described at: http://yann.lecun.com/exdb/mnist/ - -* `anscombe.json` contains a copy of - [Anscombe's quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet); it - was originally described in - - Anscombe, F. J. (1973). 'Graphs in Statistical Analysis'. American - Statistician. 27 (1): 17-21. JSTOR 2682899. - - and our copy was prepared by the - [vega_datasets library](https://github.com/altair-viz/vega_datasets/blob/4f67bdaad10f45e3549984e17e1b3088c731503d/vega_datasets/_data/anscombe.json). diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/utils/fuse_conv_bn.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/utils/fuse_conv_bn.py deleted file mode 100644 index cb7076f80bf37f7931185bf0293ffcc1ce19c8ef..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/utils/fuse_conv_bn.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - - -def _fuse_conv_bn(conv, bn): - """Fuse conv and bn into one module. - - Args: - conv (nn.Module): Conv to be fused. - bn (nn.Module): BN to be fused. - - Returns: - nn.Module: Fused module. - """ - conv_w = conv.weight - conv_b = conv.bias if conv.bias is not None else torch.zeros_like( - bn.running_mean) - - factor = bn.weight / torch.sqrt(bn.running_var + bn.eps) - conv.weight = nn.Parameter(conv_w * - factor.reshape([conv.out_channels, 1, 1, 1])) - conv.bias = nn.Parameter((conv_b - bn.running_mean) * factor + bn.bias) - return conv - - -def fuse_conv_bn(module): - """Recursively fuse conv and bn in a module. 
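-
-    Concretely, ``_fuse_conv_bn`` above folds a conv with weight ``W`` and
-    bias ``b`` followed by a BN layer with affine parameters ``gamma``/``beta``
-    and running statistics ``mean``/``var`` into a single conv with
-    ``W' = W * gamma / sqrt(var + eps)`` and
-    ``b' = (b - mean) * gamma / sqrt(var + eps) + beta``.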
-
-    During inference, the functionality of batch norm layers is turned off
-    and only the running mean and variance along each channel are used, which
-    makes it possible to fold them into the preceding conv layers to save
-    computation and simplify the network structure.
-
-    Args:
-        module (nn.Module): Module to be fused.
-
-    Returns:
-        nn.Module: Fused module.
-    """
-    last_conv = None
-    last_conv_name = None
-
-    for name, child in module.named_children():
-        if isinstance(child,
-                      (nn.modules.batchnorm._BatchNorm, nn.SyncBatchNorm)):
-            if last_conv is None:  # only fuse BN that is after Conv
-                continue
-            fused_conv = _fuse_conv_bn(last_conv, child)
-            module._modules[last_conv_name] = fused_conv
-            # To reduce changes, set BN as Identity instead of deleting it.
-            module._modules[name] = nn.Identity()
-            last_conv = None
-        elif isinstance(child, nn.Conv2d):
-            last_conv = child
-            last_conv_name = name
-        else:
-            fuse_conv_bn(child)
-    return module
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/mask/structures.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/mask/structures.py
deleted file mode 100644
index d9ec5775f281ab8b76cb873e71a4edd9969ab905..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/mask/structures.py
+++ /dev/null
@@ -1,1024 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-import cv2
-import mmcv
-import numpy as np
-import pycocotools.mask as maskUtils
-import torch
-from mmcv.ops.roi_align import roi_align
-
-
-class BaseInstanceMasks(metaclass=ABCMeta):
-    """Base class for instance masks."""
-
-    @abstractmethod
-    def rescale(self, scale, interpolation='nearest'):
-        """Rescale masks as large as possible while keeping the aspect ratio.
-        For details, refer to `mmcv.imrescale`.
-
-        Args:
-            scale (tuple[int]): The maximum size (h, w) of rescaled mask.
-            interpolation (str): Same as :func:`mmcv.imrescale`.
-
-        Returns:
-            BaseInstanceMasks: The rescaled masks.
-        """
-
-    @abstractmethod
-    def resize(self, out_shape, interpolation='nearest'):
-        """Resize masks to the given out_shape.
-
-        Args:
-            out_shape: Target (h, w) of resized mask.
-            interpolation (str): See :func:`mmcv.imresize`.
-
-        Returns:
-            BaseInstanceMasks: The resized masks.
-        """
-
-    @abstractmethod
-    def flip(self, flip_direction='horizontal'):
-        """Flip masks along the given direction.
-
-        Args:
-            flip_direction (str): Either 'horizontal' or 'vertical'.
-
-        Returns:
-            BaseInstanceMasks: The flipped masks.
-        """
-
-    @abstractmethod
-    def pad(self, out_shape, pad_val):
-        """Pad masks to the given size of (h, w).
-
-        Args:
-            out_shape (tuple[int]): Target (h, w) of padded mask.
-            pad_val (int): The padded value.
-
-        Returns:
-            BaseInstanceMasks: The padded masks.
-        """
-
-    @abstractmethod
-    def crop(self, bbox):
-        """Crop each mask by the given bbox.
-
-        Args:
-            bbox (ndarray): Bbox in format [x1, y1, x2, y2], shape (4, ).
-
-        Return:
-            BaseInstanceMasks: The cropped masks.
-        """
-
-    @abstractmethod
-    def crop_and_resize(self,
-                        bboxes,
-                        out_shape,
-                        inds,
-                        device,
-                        interpolation='bilinear'):
-        """Crop and resize masks by the given bboxes.
-
-        This function is mainly used in mask targets computation.
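-        That is, it produces the ground-truth mask patch for each sampled
-        RoI when training a mask head.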
-        It first aligns each mask to its bbox via ``inds``, then crops the
-        mask with the assigned bbox and resizes it to the size of
-        (mask_h, mask_w).
-
-        Args:
-            bboxes (Tensor): Bboxes in format [x1, y1, x2, y2], shape (N, 4)
-            out_shape (tuple[int]): Target (h, w) of resized mask
-            inds (ndarray): Indexes to assign masks to each bbox,
-                shape (N,) and values should be between [0, num_masks - 1].
-            device (str): Device of bboxes
-            interpolation (str): See `mmcv.imresize`
-
-        Return:
-            BaseInstanceMasks: The cropped and resized masks.
-        """
-
-    @abstractmethod
-    def expand(self, expanded_h, expanded_w, top, left):
-        """see :class:`Expand`."""
-
-    @property
-    @abstractmethod
-    def areas(self):
-        """ndarray: areas of each instance."""
-
-    @abstractmethod
-    def to_ndarray(self):
-        """Convert masks to the format of ndarray.
-
-        Return:
-            ndarray: Converted masks in the format of ndarray.
-        """
-
-    @abstractmethod
-    def to_tensor(self, dtype, device):
-        """Convert masks to the format of Tensor.
-
-        Args:
-            dtype (str): Dtype of converted mask.
-            device (torch.device): Device of converted masks.
-
-        Returns:
-            Tensor: Converted masks in the format of Tensor.
-        """
-
-    @abstractmethod
-    def translate(self,
-                  out_shape,
-                  offset,
-                  direction='horizontal',
-                  fill_val=0,
-                  interpolation='bilinear'):
-        """Translate the masks.
-
-        Args:
-            out_shape (tuple[int]): Shape for output mask, format (h, w).
-            offset (int | float): The offset for translate.
-            direction (str): The translate direction, either "horizontal"
-                or "vertical".
-            fill_val (int | float): Border value. Default 0.
-            interpolation (str): Same as :func:`mmcv.imtranslate`.
-
-        Returns:
-            Translated masks.
-        """
-
-    def shear(self,
-              out_shape,
-              magnitude,
-              direction='horizontal',
-              border_value=0,
-              interpolation='bilinear'):
-        """Shear the masks.
-
-        Args:
-            out_shape (tuple[int]): Shape for output mask, format (h, w).
-            magnitude (int | float): The magnitude used for shear.
-            direction (str): The shear direction, either "horizontal"
-                or "vertical".
-            border_value (int | tuple[int]): Value used in case of a
-                constant border. Default 0.
-            interpolation (str): Same as in :func:`mmcv.imshear`.
-
-        Returns:
-            ndarray: Sheared masks.
-        """
-
-    @abstractmethod
-    def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0):
-        """Rotate the masks.
-
-        Args:
-            out_shape (tuple[int]): Shape for output mask, format (h, w).
-            angle (int | float): Rotation angle in degrees. Positive values
-                mean counter-clockwise rotation.
-            center (tuple[float], optional): Center point (w, h) of the
-                rotation in source image. If not specified, the center of
-                the image will be used.
-            scale (int | float): Isotropic scale factor.
-            fill_val (int | float): Border value. Default 0 for masks.
-
-        Returns:
-            Rotated masks.
-        """
-
-
-class BitmapMasks(BaseInstanceMasks):
-    """This class represents masks in the form of bitmaps.
-
-    Args:
-        masks (ndarray): ndarray of masks in shape (N, H, W), where N is
-            the number of objects.
- height (int): height of masks - width (int): width of masks - - Example: - >>> from mmdet.core.mask.structures import * # NOQA - >>> num_masks, H, W = 3, 32, 32 - >>> rng = np.random.RandomState(0) - >>> masks = (rng.rand(num_masks, H, W) > 0.1).astype(np.int) - >>> self = BitmapMasks(masks, height=H, width=W) - - >>> # demo crop_and_resize - >>> num_boxes = 5 - >>> bboxes = np.array([[0, 0, 30, 10.0]] * num_boxes) - >>> out_shape = (14, 14) - >>> inds = torch.randint(0, len(self), size=(num_boxes,)) - >>> device = 'cpu' - >>> interpolation = 'bilinear' - >>> new = self.crop_and_resize( - ... bboxes, out_shape, inds, device, interpolation) - >>> assert len(new) == num_boxes - >>> assert new.height, new.width == out_shape - """ - - def __init__(self, masks, height, width): - self.height = height - self.width = width - if len(masks) == 0: - self.masks = np.empty((0, self.height, self.width), dtype=np.uint8) - else: - assert isinstance(masks, (list, np.ndarray)) - if isinstance(masks, list): - assert isinstance(masks[0], np.ndarray) - assert masks[0].ndim == 2 # (H, W) - else: - assert masks.ndim == 3 # (N, H, W) - - self.masks = np.stack(masks).reshape(-1, height, width) - assert self.masks.shape[1] == self.height - assert self.masks.shape[2] == self.width - - def __getitem__(self, index): - """Index the BitmapMask. - - Args: - index (int | ndarray): Indices in the format of integer or ndarray. - - Returns: - :obj:`BitmapMasks`: Indexed bitmap masks. - """ - masks = self.masks[index].reshape(-1, self.height, self.width) - return BitmapMasks(masks, self.height, self.width) - - def __iter__(self): - return iter(self.masks) - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += f'num_masks={len(self.masks)}, ' - s += f'height={self.height}, ' - s += f'width={self.width})' - return s - - def __len__(self): - """Number of masks.""" - return len(self.masks) - - def rescale(self, scale, interpolation='nearest'): - """See :func:`BaseInstanceMasks.rescale`.""" - if len(self.masks) == 0: - new_w, new_h = mmcv.rescale_size((self.width, self.height), scale) - rescaled_masks = np.empty((0, new_h, new_w), dtype=np.uint8) - else: - rescaled_masks = np.stack([ - mmcv.imrescale(mask, scale, interpolation=interpolation) - for mask in self.masks - ]) - height, width = rescaled_masks.shape[1:] - return BitmapMasks(rescaled_masks, height, width) - - def resize(self, out_shape, interpolation='nearest'): - """See :func:`BaseInstanceMasks.resize`.""" - if len(self.masks) == 0: - resized_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - resized_masks = np.stack([ - mmcv.imresize( - mask, out_shape[::-1], interpolation=interpolation) - for mask in self.masks - ]) - return BitmapMasks(resized_masks, *out_shape) - - def flip(self, flip_direction='horizontal'): - """See :func:`BaseInstanceMasks.flip`.""" - assert flip_direction in ('horizontal', 'vertical', 'diagonal') - - if len(self.masks) == 0: - flipped_masks = self.masks - else: - flipped_masks = np.stack([ - mmcv.imflip(mask, direction=flip_direction) - for mask in self.masks - ]) - return BitmapMasks(flipped_masks, self.height, self.width) - - def pad(self, out_shape, pad_val=0): - """See :func:`BaseInstanceMasks.pad`.""" - if len(self.masks) == 0: - padded_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - padded_masks = np.stack([ - mmcv.impad(mask, shape=out_shape, pad_val=pad_val) - for mask in self.masks - ]) - return BitmapMasks(padded_masks, *out_shape) - - def crop(self, bbox): - """See :func:`BaseInstanceMasks.crop`.""" - 
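-        # bbox is an absolute [x1, y1, x2, y2] ndarray; it is clipped to the
-        # image bounds below and the crop is taken with plain numpy slicing.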
assert isinstance(bbox, np.ndarray) - assert bbox.ndim == 1 - - # clip the boundary - bbox = bbox.copy() - bbox[0::2] = np.clip(bbox[0::2], 0, self.width) - bbox[1::2] = np.clip(bbox[1::2], 0, self.height) - x1, y1, x2, y2 = bbox - w = np.maximum(x2 - x1, 1) - h = np.maximum(y2 - y1, 1) - - if len(self.masks) == 0: - cropped_masks = np.empty((0, h, w), dtype=np.uint8) - else: - cropped_masks = self.masks[:, y1:y1 + h, x1:x1 + w] - return BitmapMasks(cropped_masks, h, w) - - def crop_and_resize(self, - bboxes, - out_shape, - inds, - device='cpu', - interpolation='bilinear'): - """See :func:`BaseInstanceMasks.crop_and_resize`.""" - if len(self.masks) == 0: - empty_masks = np.empty((0, *out_shape), dtype=np.uint8) - return BitmapMasks(empty_masks, *out_shape) - - # convert bboxes to tensor - if isinstance(bboxes, np.ndarray): - bboxes = torch.from_numpy(bboxes).to(device=device) - if isinstance(inds, np.ndarray): - inds = torch.from_numpy(inds).to(device=device) - - num_bbox = bboxes.shape[0] - fake_inds = torch.arange( - num_bbox, device=device).to(dtype=bboxes.dtype)[:, None] - rois = torch.cat([fake_inds, bboxes], dim=1) # Nx5 - rois = rois.to(device=device) - if num_bbox > 0: - gt_masks_th = torch.from_numpy(self.masks).to(device).index_select( - 0, inds).to(dtype=rois.dtype) - targets = roi_align(gt_masks_th[:, None, :, :], rois, out_shape, - 1.0, 0, 'avg', True).squeeze(1) - resized_masks = (targets >= 0.5).cpu().numpy() - else: - resized_masks = [] - return BitmapMasks(resized_masks, *out_shape) - - def expand(self, expanded_h, expanded_w, top, left): - """See :func:`BaseInstanceMasks.expand`.""" - if len(self.masks) == 0: - expanded_mask = np.empty((0, expanded_h, expanded_w), - dtype=np.uint8) - else: - expanded_mask = np.zeros((len(self), expanded_h, expanded_w), - dtype=np.uint8) - expanded_mask[:, top:top + self.height, - left:left + self.width] = self.masks - return BitmapMasks(expanded_mask, expanded_h, expanded_w) - - def translate(self, - out_shape, - offset, - direction='horizontal', - fill_val=0, - interpolation='bilinear'): - """Translate the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - offset (int | float): The offset for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - fill_val (int | float): Border value. Default 0 for masks. - interpolation (str): Same as :func:`mmcv.imtranslate`. - - Returns: - BitmapMasks: Translated BitmapMasks. 
- - Example: - >>> from mmdet.core.mask.structures import BitmapMasks - >>> self = BitmapMasks.random(dtype=np.uint8) - >>> out_shape = (32, 32) - >>> offset = 4 - >>> direction = 'horizontal' - >>> fill_val = 0 - >>> interpolation = 'bilinear' - >>> # Note, There seem to be issues when: - >>> # * out_shape is different than self's shape - >>> # * the mask dtype is not supported by cv2.AffineWarp - >>> new = self.translate(out_shape, offset, direction, fill_val, - >>> interpolation) - >>> assert len(new) == len(self) - >>> assert new.height, new.width == out_shape - """ - if len(self.masks) == 0: - translated_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - translated_masks = mmcv.imtranslate( - self.masks.transpose((1, 2, 0)), - offset, - direction, - border_value=fill_val, - interpolation=interpolation) - if translated_masks.ndim == 2: - translated_masks = translated_masks[:, :, None] - translated_masks = translated_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(translated_masks, *out_shape) - - def shear(self, - out_shape, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - magnitude (int | float): The magnitude used for shear. - direction (str): The shear direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as in :func:`mmcv.imshear`. - - Returns: - BitmapMasks: The sheared masks. - """ - if len(self.masks) == 0: - sheared_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - sheared_masks = mmcv.imshear( - self.masks.transpose((1, 2, 0)), - magnitude, - direction, - border_value=border_value, - interpolation=interpolation) - if sheared_masks.ndim == 2: - sheared_masks = sheared_masks[:, :, None] - sheared_masks = sheared_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(sheared_masks, *out_shape) - - def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0): - """Rotate the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - angle (int | float): Rotation angle in degrees. Positive values - mean counter-clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the - rotation in source image. If not specified, the center of - the image will be used. - scale (int | float): Isotropic scale factor. - fill_val (int | float): Border value. Default 0 for masks. - - Returns: - BitmapMasks: Rotated BitmapMasks. 
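-
-        Example:
-            >>> # hypothetical usage sketch: rotate random masks by 90 degrees
-            >>> self = BitmapMasks.random()
-            >>> new = self.rotate((32, 32), angle=90)
-            >>> assert len(new) == len(self)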
- """ - if len(self.masks) == 0: - rotated_masks = np.empty((0, *out_shape), dtype=self.masks.dtype) - else: - rotated_masks = mmcv.imrotate( - self.masks.transpose((1, 2, 0)), - angle, - center=center, - scale=scale, - border_value=fill_val) - if rotated_masks.ndim == 2: - # case when only one mask, (h, w) - rotated_masks = rotated_masks[:, :, None] # (h, w, 1) - rotated_masks = rotated_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(rotated_masks, *out_shape) - - @property - def areas(self): - """See :py:attr:`BaseInstanceMasks.areas`.""" - return self.masks.sum((1, 2)) - - def to_ndarray(self): - """See :func:`BaseInstanceMasks.to_ndarray`.""" - return self.masks - - def to_tensor(self, dtype, device): - """See :func:`BaseInstanceMasks.to_tensor`.""" - return torch.tensor(self.masks, dtype=dtype, device=device) - - @classmethod - def random(cls, - num_masks=3, - height=32, - width=32, - dtype=np.uint8, - rng=None): - """Generate random bitmap masks for demo / testing purposes. - - Example: - >>> from mmdet.core.mask.structures import BitmapMasks - >>> self = BitmapMasks.random() - >>> print('self = {}'.format(self)) - self = BitmapMasks(num_masks=3, height=32, width=32) - """ - from mmdet.utils.util_random import ensure_rng - rng = ensure_rng(rng) - masks = (rng.rand(num_masks, height, width) > 0.1).astype(dtype) - self = cls(masks, height=height, width=width) - return self - - -class PolygonMasks(BaseInstanceMasks): - """This class represents masks in the form of polygons. - - Polygons is a list of three levels. The first level of the list - corresponds to objects, the second level to the polys that compose the - object, the third level to the poly coordinates - - Args: - masks (list[list[ndarray]]): The first level of the list - corresponds to objects, the second level to the polys that - compose the object, the third level to the poly coordinates - height (int): height of masks - width (int): width of masks - - Example: - >>> from mmdet.core.mask.structures import * # NOQA - >>> masks = [ - >>> [ np.array([0, 0, 10, 0, 10, 10., 0, 10, 0, 0]) ] - >>> ] - >>> height, width = 16, 16 - >>> self = PolygonMasks(masks, height, width) - - >>> # demo translate - >>> new = self.translate((16, 16), 4., direction='horizontal') - >>> assert np.all(new.masks[0][0][1::2] == masks[0][0][1::2]) - >>> assert np.all(new.masks[0][0][0::2] == masks[0][0][0::2] + 4) - - >>> # demo crop_and_resize - >>> num_boxes = 3 - >>> bboxes = np.array([[0, 0, 30, 10.0]] * num_boxes) - >>> out_shape = (16, 16) - >>> inds = torch.randint(0, len(self), size=(num_boxes,)) - >>> device = 'cpu' - >>> interpolation = 'bilinear' - >>> new = self.crop_and_resize( - ... bboxes, out_shape, inds, device, interpolation) - >>> assert len(new) == num_boxes - >>> assert new.height, new.width == out_shape - """ - - def __init__(self, masks, height, width): - assert isinstance(masks, list) - if len(masks) > 0: - assert isinstance(masks[0], list) - assert isinstance(masks[0][0], np.ndarray) - - self.height = height - self.width = width - self.masks = masks - - def __getitem__(self, index): - """Index the polygon masks. - - Args: - index (ndarray | List): The indices. - - Returns: - :obj:`PolygonMasks`: The indexed polygon masks. 
- """ - if isinstance(index, np.ndarray): - index = index.tolist() - if isinstance(index, list): - masks = [self.masks[i] for i in index] - else: - try: - masks = self.masks[index] - except Exception: - raise ValueError( - f'Unsupported input of type {type(index)} for indexing!') - if len(masks) and isinstance(masks[0], np.ndarray): - masks = [masks] # ensure a list of three levels - return PolygonMasks(masks, self.height, self.width) - - def __iter__(self): - return iter(self.masks) - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += f'num_masks={len(self.masks)}, ' - s += f'height={self.height}, ' - s += f'width={self.width})' - return s - - def __len__(self): - """Number of masks.""" - return len(self.masks) - - def rescale(self, scale, interpolation=None): - """see :func:`BaseInstanceMasks.rescale`""" - new_w, new_h = mmcv.rescale_size((self.width, self.height), scale) - if len(self.masks) == 0: - rescaled_masks = PolygonMasks([], new_h, new_w) - else: - rescaled_masks = self.resize((new_h, new_w)) - return rescaled_masks - - def resize(self, out_shape, interpolation=None): - """see :func:`BaseInstanceMasks.resize`""" - if len(self.masks) == 0: - resized_masks = PolygonMasks([], *out_shape) - else: - h_scale = out_shape[0] / self.height - w_scale = out_shape[1] / self.width - resized_masks = [] - for poly_per_obj in self.masks: - resized_poly = [] - for p in poly_per_obj: - p = p.copy() - p[0::2] *= w_scale - p[1::2] *= h_scale - resized_poly.append(p) - resized_masks.append(resized_poly) - resized_masks = PolygonMasks(resized_masks, *out_shape) - return resized_masks - - def flip(self, flip_direction='horizontal'): - """see :func:`BaseInstanceMasks.flip`""" - assert flip_direction in ('horizontal', 'vertical', 'diagonal') - if len(self.masks) == 0: - flipped_masks = PolygonMasks([], self.height, self.width) - else: - flipped_masks = [] - for poly_per_obj in self.masks: - flipped_poly_per_obj = [] - for p in poly_per_obj: - p = p.copy() - if flip_direction == 'horizontal': - p[0::2] = self.width - p[0::2] - elif flip_direction == 'vertical': - p[1::2] = self.height - p[1::2] - else: - p[0::2] = self.width - p[0::2] - p[1::2] = self.height - p[1::2] - flipped_poly_per_obj.append(p) - flipped_masks.append(flipped_poly_per_obj) - flipped_masks = PolygonMasks(flipped_masks, self.height, - self.width) - return flipped_masks - - def crop(self, bbox): - """see :func:`BaseInstanceMasks.crop`""" - assert isinstance(bbox, np.ndarray) - assert bbox.ndim == 1 - - # clip the boundary - bbox = bbox.copy() - bbox[0::2] = np.clip(bbox[0::2], 0, self.width) - bbox[1::2] = np.clip(bbox[1::2], 0, self.height) - x1, y1, x2, y2 = bbox - w = np.maximum(x2 - x1, 1) - h = np.maximum(y2 - y1, 1) - - if len(self.masks) == 0: - cropped_masks = PolygonMasks([], h, w) - else: - cropped_masks = [] - for poly_per_obj in self.masks: - cropped_poly_per_obj = [] - for p in poly_per_obj: - # pycocotools will clip the boundary - p = p.copy() - p[0::2] -= bbox[0] - p[1::2] -= bbox[1] - cropped_poly_per_obj.append(p) - cropped_masks.append(cropped_poly_per_obj) - cropped_masks = PolygonMasks(cropped_masks, h, w) - return cropped_masks - - def pad(self, out_shape, pad_val=0): - """padding has no effect on polygons`""" - return PolygonMasks(self.masks, *out_shape) - - def expand(self, *args, **kwargs): - """TODO: Add expand for polygon""" - raise NotImplementedError - - def crop_and_resize(self, - bboxes, - out_shape, - inds, - device='cpu', - interpolation='bilinear'): - """see 
:func:`BaseInstanceMasks.crop_and_resize`""" - out_h, out_w = out_shape - if len(self.masks) == 0: - return PolygonMasks([], out_h, out_w) - - resized_masks = [] - for i in range(len(bboxes)): - mask = self.masks[inds[i]] - bbox = bboxes[i, :] - x1, y1, x2, y2 = bbox - w = np.maximum(x2 - x1, 1) - h = np.maximum(y2 - y1, 1) - h_scale = out_h / max(h, 0.1) # avoid too large scale - w_scale = out_w / max(w, 0.1) - - resized_mask = [] - for p in mask: - p = p.copy() - # crop - # pycocotools will clip the boundary - p[0::2] -= bbox[0] - p[1::2] -= bbox[1] - - # resize - p[0::2] *= w_scale - p[1::2] *= h_scale - resized_mask.append(p) - resized_masks.append(resized_mask) - return PolygonMasks(resized_masks, *out_shape) - - def translate(self, - out_shape, - offset, - direction='horizontal', - fill_val=None, - interpolation=None): - """Translate the PolygonMasks. - - Example: - >>> self = PolygonMasks.random(dtype=np.int) - >>> out_shape = (self.height, self.width) - >>> new = self.translate(out_shape, 4., direction='horizontal') - >>> assert np.all(new.masks[0][0][1::2] == self.masks[0][0][1::2]) - >>> assert np.all(new.masks[0][0][0::2] == self.masks[0][0][0::2] + 4) # noqa: E501 - """ - assert fill_val is None or fill_val == 0, 'Here fill_val is not '\ - f'used, and defaultly should be None or 0. got {fill_val}.' - if len(self.masks) == 0: - translated_masks = PolygonMasks([], *out_shape) - else: - translated_masks = [] - for poly_per_obj in self.masks: - translated_poly_per_obj = [] - for p in poly_per_obj: - p = p.copy() - if direction == 'horizontal': - p[0::2] = np.clip(p[0::2] + offset, 0, out_shape[1]) - elif direction == 'vertical': - p[1::2] = np.clip(p[1::2] + offset, 0, out_shape[0]) - translated_poly_per_obj.append(p) - translated_masks.append(translated_poly_per_obj) - translated_masks = PolygonMasks(translated_masks, *out_shape) - return translated_masks - - def shear(self, - out_shape, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """See :func:`BaseInstanceMasks.shear`.""" - if len(self.masks) == 0: - sheared_masks = PolygonMasks([], *out_shape) - else: - sheared_masks = [] - if direction == 'horizontal': - shear_matrix = np.stack([[1, magnitude], - [0, 1]]).astype(np.float32) - elif direction == 'vertical': - shear_matrix = np.stack([[1, 0], [magnitude, - 1]]).astype(np.float32) - for poly_per_obj in self.masks: - sheared_poly = [] - for p in poly_per_obj: - p = np.stack([p[0::2], p[1::2]], axis=0) # [2, n] - new_coords = np.matmul(shear_matrix, p) # [2, n] - new_coords[0, :] = np.clip(new_coords[0, :], 0, - out_shape[1]) - new_coords[1, :] = np.clip(new_coords[1, :], 0, - out_shape[0]) - sheared_poly.append( - new_coords.transpose((1, 0)).reshape(-1)) - sheared_masks.append(sheared_poly) - sheared_masks = PolygonMasks(sheared_masks, *out_shape) - return sheared_masks - - def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0): - """See :func:`BaseInstanceMasks.rotate`.""" - if len(self.masks) == 0: - rotated_masks = PolygonMasks([], *out_shape) - else: - rotated_masks = [] - rotate_matrix = cv2.getRotationMatrix2D(center, -angle, scale) - for poly_per_obj in self.masks: - rotated_poly = [] - for p in poly_per_obj: - p = p.copy() - coords = np.stack([p[0::2], p[1::2]], axis=1) # [n, 2] - # pad 1 to convert from format [x, y] to homogeneous - # coordinates format [x, y, 1] - coords = np.concatenate( - (coords, np.ones((coords.shape[0], 1), coords.dtype)), - axis=1) # [n, 3] - rotated_coords = np.matmul( - 
rotate_matrix[None, :, :],
-                        coords[:, :, None])[..., 0]  # [n, 2, 1] -> [n, 2]
-                    rotated_coords[:, 0] = np.clip(rotated_coords[:, 0], 0,
-                                                   out_shape[1])
-                    rotated_coords[:, 1] = np.clip(rotated_coords[:, 1], 0,
-                                                   out_shape[0])
-                    rotated_poly.append(rotated_coords.reshape(-1))
-                rotated_masks.append(rotated_poly)
-            rotated_masks = PolygonMasks(rotated_masks, *out_shape)
-        return rotated_masks
-
-    def to_bitmap(self):
-        """Convert polygon masks to bitmap masks."""
-        bitmap_masks = self.to_ndarray()
-        return BitmapMasks(bitmap_masks, self.height, self.width)
-
-    @property
-    def areas(self):
-        """Compute areas of masks.
-
-        This function is modified from `detectron2
-        `_.
-        It only works with polygons, using the shoelace formula.
-
-        Return:
-            ndarray: areas of each instance
-        """  # noqa: W501
-        area = []
-        for polygons_per_obj in self.masks:
-            area_per_obj = 0
-            for p in polygons_per_obj:
-                area_per_obj += self._polygon_area(p[0::2], p[1::2])
-            area.append(area_per_obj)
-        return np.asarray(area)
-
-    def _polygon_area(self, x, y):
-        """Compute the area of a component of a polygon.
-
-        Using the shoelace formula:
-        https://stackoverflow.com/questions/24467972/calculate-area-of-polygon-given-x-y-coordinates
-
-        Args:
-            x (ndarray): x coordinates of the component
-            y (ndarray): y coordinates of the component
-
-        Return:
-            float: the area of the component
-        """  # noqa: 501
-        return 0.5 * np.abs(
-            np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
-
-    def to_ndarray(self):
-        """Convert masks to the format of ndarray."""
-        if len(self.masks) == 0:
-            return np.empty((0, self.height, self.width), dtype=np.uint8)
-        bitmap_masks = []
-        for poly_per_obj in self.masks:
-            bitmap_masks.append(
-                polygon_to_bitmap(poly_per_obj, self.height, self.width))
-        return np.stack(bitmap_masks)
-
-    def to_tensor(self, dtype, device):
-        """See :func:`BaseInstanceMasks.to_tensor`."""
-        if len(self.masks) == 0:
-            return torch.empty((0, self.height, self.width),
-                               dtype=dtype,
-                               device=device)
-        ndarray_masks = self.to_ndarray()
-        return torch.tensor(ndarray_masks, dtype=dtype, device=device)
-
-    @classmethod
-    def random(cls,
-               num_masks=3,
-               height=32,
-               width=32,
-               n_verts=5,
-               dtype=np.float32,
-               rng=None):
-        """Generate random polygon masks for demo / testing purposes.
-
-        Adapted from [1]_
-
-        References:
-            .. [1] https://gitlab.kitware.com/computer-vision/kwimage/-/blob/928cae35ca8/kwimage/structs/polygon.py#L379  # noqa: E501
-
-        Example:
-            >>> from mmdet.core.mask.structures import PolygonMasks
-            >>> self = PolygonMasks.random()
-            >>> print('self = {}'.format(self))
-        """
-        from mmdet.utils.util_random import ensure_rng
-        rng = ensure_rng(rng)
-
-        def _gen_polygon(n, irregularity, spikeyness):
-            """Creates the polygon by sampling points on a circle around the
-            centre. Random noise is added by varying the angular spacing
-            between sequential points, and by varying the radial distance of
-            each point from the centre.
-
-            Based on original code by Mike Ounsworth
-
-            Args:
-                n (int): number of vertices
-                irregularity (float): [0,1] indicating how much variance there
-                    is in the angular spacing of vertices. [0,1] will map to
-                    [0, 2pi/numberOfVerts]
-                spikeyness (float): [0,1] indicating how much variance there is
-                    in each vertex from the circle of radius aveRadius. [0,1]
-                    will map to [0, aveRadius]
-
-            Returns:
-                a list of vertices, in CCW order.
- """ - from scipy.stats import truncnorm - # Generate around the unit circle - cx, cy = (0.0, 0.0) - radius = 1 - - tau = np.pi * 2 - - irregularity = np.clip(irregularity, 0, 1) * 2 * np.pi / n - spikeyness = np.clip(spikeyness, 1e-9, 1) - - # generate n angle steps - lower = (tau / n) - irregularity - upper = (tau / n) + irregularity - angle_steps = rng.uniform(lower, upper, n) - - # normalize the steps so that point 0 and point n+1 are the same - k = angle_steps.sum() / (2 * np.pi) - angles = (angle_steps / k).cumsum() + rng.uniform(0, tau) - - # Convert high and low values to be wrt the standard normal range - # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.truncnorm.html - low = 0 - high = 2 * radius - mean = radius - std = spikeyness - a = (low - mean) / std - b = (high - mean) / std - tnorm = truncnorm(a=a, b=b, loc=mean, scale=std) - - # now generate the points - radii = tnorm.rvs(n, random_state=rng) - x_pts = cx + radii * np.cos(angles) - y_pts = cy + radii * np.sin(angles) - - points = np.hstack([x_pts[:, None], y_pts[:, None]]) - - # Scale to 0-1 space - points = points - points.min(axis=0) - points = points / points.max(axis=0) - - # Randomly place within 0-1 space - points = points * (rng.rand() * .8 + .2) - min_pt = points.min(axis=0) - max_pt = points.max(axis=0) - - high = (1 - max_pt) - low = (0 - min_pt) - offset = (rng.rand(2) * (high - low)) + low - points = points + offset - return points - - def _order_vertices(verts): - """ - References: - https://stackoverflow.com/questions/1709283/how-can-i-sort-a-coordinate-list-for-a-rectangle-counterclockwise - """ - mlat = verts.T[0].sum() / len(verts) - mlng = verts.T[1].sum() / len(verts) - - tau = np.pi * 2 - angle = (np.arctan2(mlat - verts.T[0], verts.T[1] - mlng) + - tau) % tau - sortx = angle.argsort() - verts = verts.take(sortx, axis=0) - return verts - - # Generate a random exterior for each requested mask - masks = [] - for _ in range(num_masks): - exterior = _order_vertices(_gen_polygon(n_verts, 0.9, 0.9)) - exterior = (exterior * [(width, height)]).astype(dtype) - masks.append([exterior.ravel()]) - - self = cls(masks, height, width) - return self - - -def polygon_to_bitmap(polygons, height, width): - """Convert masks from the form of polygons to bitmaps. 
-
-    Args:
-        polygons (list[ndarray]): masks in polygon representation
-        height (int): mask height
-        width (int): mask width
-
-    Return:
-        ndarray: the converted masks in bitmap representation
-    """
-    rles = maskUtils.frPyObjects(polygons, height, width)
-    rle = maskUtils.merge(rles)
-    bitmap_mask = maskUtils.decode(rle).astype(np.bool)
-    return bitmap_mask
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/assigners/base_assigner.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/assigners/base_assigner.py
deleted file mode 100644
index 1ff0160dbb4bfbf53cb40d1d5cb29bcc3d197a59..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/assigners/base_assigner.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-
-class BaseAssigner(metaclass=ABCMeta):
-    """Base assigner that assigns boxes to ground truth boxes."""
-
-    @abstractmethod
-    def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None):
-        """Assign each box to a ground-truth box or mark it as negative."""
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/coder/delta_xywh_bbox_coder.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/coder/delta_xywh_bbox_coder.py
deleted file mode 100644
index dc9a41e4464ac6332e26e7c21248f65d06a78af9..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/coder/delta_xywh_bbox_coder.py
+++ /dev/null
@@ -1,237 +0,0 @@
-import annotator.uniformer.mmcv as mmcv
-import numpy as np
-import torch
-
-from ..builder import BBOX_CODERS
-from .base_bbox_coder import BaseBBoxCoder
-
-
-@BBOX_CODERS.register_module()
-class DeltaXYWHBBoxCoder(BaseBBoxCoder):
-    """Delta XYWH BBox coder.
-
-    Following the practice in `R-CNN `_,
-    this coder encodes bbox (x1, y1, x2, y2) into delta (dx, dy, dw, dh) and
-    decodes delta (dx, dy, dw, dh) back to original bbox (x1, y1, x2, y2).
-
-    Args:
-        target_means (Sequence[float]): Denormalizing means of target for
-            delta coordinates
-        target_stds (Sequence[float]): Denormalizing standard deviation of
-            target for delta coordinates
-        clip_border (bool, optional): Whether to clip boxes that fall outside
-            the image border. Defaults to True.
-    """
-
-    def __init__(self,
-                 target_means=(0., 0., 0., 0.),
-                 target_stds=(1., 1., 1., 1.),
-                 clip_border=True):
-        super(BaseBBoxCoder, self).__init__()
-        self.means = target_means
-        self.stds = target_stds
-        self.clip_border = clip_border
-
-    def encode(self, bboxes, gt_bboxes):
-        """Get box regression transformation deltas that can be used to
-        transform the ``bboxes`` into the ``gt_bboxes``.
-
-        Args:
-            bboxes (torch.Tensor): Source boxes, e.g., object proposals.
-            gt_bboxes (torch.Tensor): Target of the transformation, e.g.,
-                ground-truth boxes.
-
-        Returns:
-            torch.Tensor: Box transformation deltas
-        """
-
-        assert bboxes.size(0) == gt_bboxes.size(0)
-        assert bboxes.size(-1) == gt_bboxes.size(-1) == 4
-        encoded_bboxes = bbox2delta(bboxes, gt_bboxes, self.means, self.stds)
-        return encoded_bboxes
-
-    def decode(self,
-               bboxes,
-               pred_bboxes,
-               max_shape=None,
-               wh_ratio_clip=16 / 1000):
-        """Apply transformation `pred_bboxes` to `boxes`.
-
-        Args:
-            bboxes (torch.Tensor): Basic boxes. Shape (B, N, 4) or (N, 4)
-            pred_bboxes (Tensor): Encoded offsets with respect to each roi.
- Has shape (B, N, num_classes * 4) or (B, N, 4) or - (N, num_classes * 4) or (N, 4). Note N = num_anchors * W * H - when rois is a grid of anchors.Offset encoding follows [1]_. - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If bboxes shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - wh_ratio_clip (float, optional): The allowed ratio between - width and height. - - Returns: - torch.Tensor: Decoded boxes. - """ - - assert pred_bboxes.size(0) == bboxes.size(0) - if pred_bboxes.ndim == 3: - assert pred_bboxes.size(1) == bboxes.size(1) - decoded_bboxes = delta2bbox(bboxes, pred_bboxes, self.means, self.stds, - max_shape, wh_ratio_clip, self.clip_border) - - return decoded_bboxes - - -@mmcv.jit(coderize=True) -def bbox2delta(proposals, gt, means=(0., 0., 0., 0.), stds=(1., 1., 1., 1.)): - """Compute deltas of proposals w.r.t. gt. - - We usually compute the deltas of x, y, w, h of proposals w.r.t ground - truth bboxes to get regression target. - This is the inverse function of :func:`delta2bbox`. - - Args: - proposals (Tensor): Boxes to be transformed, shape (N, ..., 4) - gt (Tensor): Gt bboxes to be used as base, shape (N, ..., 4) - means (Sequence[float]): Denormalizing means for delta coordinates - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates - - Returns: - Tensor: deltas with shape (N, 4), where columns represent dx, dy, - dw, dh. - """ - assert proposals.size() == gt.size() - - proposals = proposals.float() - gt = gt.float() - px = (proposals[..., 0] + proposals[..., 2]) * 0.5 - py = (proposals[..., 1] + proposals[..., 3]) * 0.5 - pw = proposals[..., 2] - proposals[..., 0] - ph = proposals[..., 3] - proposals[..., 1] - - gx = (gt[..., 0] + gt[..., 2]) * 0.5 - gy = (gt[..., 1] + gt[..., 3]) * 0.5 - gw = gt[..., 2] - gt[..., 0] - gh = gt[..., 3] - gt[..., 1] - - dx = (gx - px) / pw - dy = (gy - py) / ph - dw = torch.log(gw / pw) - dh = torch.log(gh / ph) - deltas = torch.stack([dx, dy, dw, dh], dim=-1) - - means = deltas.new_tensor(means).unsqueeze(0) - stds = deltas.new_tensor(stds).unsqueeze(0) - deltas = deltas.sub_(means).div_(stds) - - return deltas - - -@mmcv.jit(coderize=True) -def delta2bbox(rois, - deltas, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.), - max_shape=None, - wh_ratio_clip=16 / 1000, - clip_border=True): - """Apply deltas to shift/scale base boxes. - - Typically the rois are anchor or proposed bounding boxes and the deltas are - network outputs used to shift/scale those boxes. - This is the inverse function of :func:`bbox2delta`. - - Args: - rois (Tensor): Boxes to be transformed. Has shape (N, 4) or (B, N, 4) - deltas (Tensor): Encoded offsets with respect to each roi. - Has shape (B, N, num_classes * 4) or (B, N, 4) or - (N, num_classes * 4) or (N, 4). Note N = num_anchors * W * H - when rois is a grid of anchors.Offset encoding follows [1]_. - means (Sequence[float]): Denormalizing means for delta coordinates - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If rois shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - wh_ratio_clip (float): Maximum aspect ratio for boxes. 
clip_border (bool, optional): Whether to clip boxes that fall
-            outside the image border. Defaults to True.
-
-    Returns:
-        Tensor: Boxes with shape (B, N, num_classes * 4) or (B, N, 4) or
-           (N, num_classes * 4) or (N, 4), where the 4 columns represent
-           tl_x, tl_y, br_x, br_y.
-
-    References:
-        .. [1] https://arxiv.org/abs/1311.2524
-
-    Example:
-        >>> rois = torch.Tensor([[ 0.,  0.,  1.,  1.],
-        >>>                      [ 0.,  0.,  1.,  1.],
-        >>>                      [ 0.,  0.,  1.,  1.],
-        >>>                      [ 5.,  5.,  5.,  5.]])
-        >>> deltas = torch.Tensor([[  0.,   0.,   0.,   0.],
-        >>>                        [  1.,   1.,   1.,   1.],
-        >>>                        [  0.,   0.,   2.,  -1.],
-        >>>                        [ 0.7, -1.9, -0.5,  0.3]])
-        >>> delta2bbox(rois, deltas, max_shape=(32, 32, 3))
-        tensor([[0.0000, 0.0000, 1.0000, 1.0000],
-                [0.1409, 0.1409, 2.8591, 2.8591],
-                [0.0000, 0.3161, 4.1945, 0.6839],
-                [5.0000, 5.0000, 5.0000, 5.0000]])
-    """
-    means = deltas.new_tensor(means).view(1,
-                                          -1).repeat(1,
-                                                     deltas.size(-1) // 4)
-    stds = deltas.new_tensor(stds).view(1, -1).repeat(1, deltas.size(-1) // 4)
-    denorm_deltas = deltas * stds + means
-    dx = denorm_deltas[..., 0::4]
-    dy = denorm_deltas[..., 1::4]
-    dw = denorm_deltas[..., 2::4]
-    dh = denorm_deltas[..., 3::4]
-    max_ratio = np.abs(np.log(wh_ratio_clip))
-    dw = dw.clamp(min=-max_ratio, max=max_ratio)
-    dh = dh.clamp(min=-max_ratio, max=max_ratio)
-    x1, y1 = rois[..., 0], rois[..., 1]
-    x2, y2 = rois[..., 2], rois[..., 3]
-    # Compute center of each roi
-    px = ((x1 + x2) * 0.5).unsqueeze(-1).expand_as(dx)
-    py = ((y1 + y2) * 0.5).unsqueeze(-1).expand_as(dy)
-    # Compute width/height of each roi
-    pw = (x2 - x1).unsqueeze(-1).expand_as(dw)
-    ph = (y2 - y1).unsqueeze(-1).expand_as(dh)
-    # Use exp(network energy) to enlarge/shrink each roi
-    gw = pw * dw.exp()
-    gh = ph * dh.exp()
-    # Use network energy to shift the center of each roi
-    gx = px + pw * dx
-    gy = py + ph * dy
-    # Convert center-xy/width/height to top-left, bottom-right
-    x1 = gx - gw * 0.5
-    y1 = gy - gh * 0.5
-    x2 = gx + gw * 0.5
-    y2 = gy + gh * 0.5
-
-    bboxes = torch.stack([x1, y1, x2, y2], dim=-1).view(deltas.size())
-
-    if clip_border and max_shape is not None:
-        if not isinstance(max_shape, torch.Tensor):
-            max_shape = x1.new_tensor(max_shape)
-        max_shape = max_shape[..., :2].type_as(x1)
-        if max_shape.ndim == 2:
-            assert bboxes.ndim == 3
-            assert max_shape.size(0) == bboxes.size(0)
-
-        min_xy = x1.new_tensor(0)
-        max_xy = torch.cat(
-            [max_shape] * (deltas.size(-1) // 2),
-            dim=-1).flip(-1).unsqueeze(-2)
-        bboxes = torch.where(bboxes < min_xy, min_xy, bboxes)
-        bboxes = torch.where(bboxes > max_xy, max_xy, bboxes)
-
-    return bboxes
diff --git a/spaces/RuijiaTan/MultiPrincipalElementAlloyPropertyPredictor/Parser/main.py b/spaces/RuijiaTan/MultiPrincipalElementAlloyPropertyPredictor/Parser/main.py
deleted file mode 100644
index bb0969ea37b452e361c6da3b422819672fc3f9b2..0000000000000000000000000000000000000000
--- a/spaces/RuijiaTan/MultiPrincipalElementAlloyPropertyPredictor/Parser/main.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import csv
-import os
-import xlrd
-import numpy as np
-from sklearn.impute import SimpleImputer
-
-import element
-import clear_data
-import pandas as pd
-
-'''
-This file reads the dataset, normalizes the element ratios, and writes the
-parsed compositions to the "parser_category" folder.
-
-It then writes variants of the dataset with three different null-handling
-strategies to the folders "drop_null", "fill_null" and "interpolate".
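-
-A minimal usage sketch (matching the __main__ block at the bottom of this
-file; paths are relative to the working directory):
-
-    read_data("mechanical_composition")  # writes parser_result/parser_category/Parser_element.csv
-    get_mechnical("mechanical.xls", "hardness")  # appends the hardness column and writes the fill_null/ and drop_null/ variants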
-''' - -category = ["compressive_strength","elongation","hardness","plasticity","tensile_strength","yield_strength"] - -def read_data(category): - csv_reader = csv.reader(open(category+".csv")) - total_row = sum(1 for line in open(category+".csv")) - - ## Build a new array whose elements are all 0. - result = np.zeros(((total_row, len(element.elements_list))), dtype=float) - count = 0 - for alloy in csv_reader: - ## interate every line(alloy) in the csv file. - alloy_ratio = clear_data.normalize_molar_ratios(clear_data.clean_row(str(alloy[0]))[1]) - alloy_dic = dict(zip(clear_data.clean_row(str(alloy[0]))[0], alloy_ratio)) - - ## Add the corresponding ratios at the proper location. - for key in alloy_dic.keys(): - result[count, element.elements_list.index(key)] = float(alloy_dic.get(key)) - count += 1 - - ## Save the result(array) as the 'Parser.csv' - err_csv = os.path.join(os.path.expanduser('.'), 'deploy', 'error.csv') - - with open("parser_result/parser_category/"+"Parser_element.csv", 'w') as f: - writer = csv.writer(f) - writer.writerow(element.elements_list) - count = 0 - for row in result: - writer.writerow(row) - count += 1 - -def get_mechnical(path,category): - ## For Mechnical Targets.csv - m_target = xlrd.open_workbook(path) - m_sheet = m_target.sheets()[0] - - # Get the target data of the machine learning model - hardness = m_sheet.col_values(4)[2:] - hardness.insert(0,"hardness") - yield_strength = m_sheet.col_values(5)[2:] - yield_strength.insert(0, "yield_strength") - tensile_strength = m_sheet.col_values(6)[2:] - tensile_strength.insert(0,"tensile_strength") - elongation = m_sheet.col_values(7)[2:] - elongation.insert(0,"elongation") - compressive_strength = m_sheet.col_values(8)[2:] - compressive_strength.insert(0,"compressive_strength") - plasticity = m_sheet.col_values(9)[2:] - plasticity.insert(0,"plasticity") - - # Save the mechanical properties of alloys. - with open("parser_result/Parser_element.csv") as csvFile: - rows = csv.reader(csvFile) - with open(("parser_result/parser_category/Parser_"+category+".csv"), 'w') as f: - writer = csv.writer(f) - index = 0 - for row in rows: - if category=="hardness": - row.append(hardness[index]) - elif category=="yield_strength": - row.append(yield_strength[index]) - elif category == "tensile_strength": - row.append(tensile_strength[index]) - elif category == "elongation": - row.append(elongation[index]) - elif category == "compressive_strength": - row.append(compressive_strength[index]) - elif category == "plasticity": - row.append(plasticity[index]) - writer.writerow(row) - index += 1 - data = pd.read_csv('parser_result/parser_category/Parser_'+category+'.csv') - - last_column = data.iloc[:, -1] - null_ratio = last_column.isnull().mean() - print("Null ratio in " + category +"dataset is: ", round(null_ratio,2)) - - # Replace null with 0s. - data_fillna = data.fillna(0) - df1 = pd.DataFrame(data=data_fillna) - df1.to_csv('parser_result/fill_null/'+category+'_fill_null.csv', index=False) - - # Delete null. - data_dropna = data.dropna(axis=0, how='any') - df1 = pd.DataFrame(data=data_dropna) - df1.to_csv('parser_result/drop_null/'+category+'_drop_null.csv', index=False) - - # # Split dataset to knn&rf model. 
- # data = data.fillna(0) - # df_test = data.drop(index=data.index) - # idx = 0 - # idx_exit = int(data.shape[0] * 0.07) - # for index, row in data.iterrows(): - # if row.astype(int)[-1] != 0 and idx <= idx_exit: - # df_test = df_test.append(row, ignore_index=True) - # data = data.drop([index]) - # idx += 1 - # df_test.to_csv('parser_result/RF_test/'+category+'_RF_test.csv', index=False) - # - # # Dealing with rfr_train, split it into knn_train and knn_test. - # df_train = pd.DataFrame(data=data) - # # Calculate the average number X of data(not 0). - # sum_num = 0 - # num = 0 - # for index, row in df_train.iterrows(): - # if row.astype(int)[-1] != 0: - # num += 1 - # sum_num += row.astype(int)[-1] - # mean_num = sum_num / num - # # df_0: which need to be imputed by KNN. - # df_0 = data.drop(index=data.index) - # df_pure = data.drop(index=data.index) - # for index, row in df_train.iterrows(): - # if row.astype(int)[-1] == 0: - # df_0 = df_0.append(row, ignore_index=True) - # else: - # df_pure = df_pure.append(row, ignore_index=True) - # df_0.to_csv('parser_result/KNN_test/'+category+'_KNN_test.csv', index=False) - # df_pure.to_csv('parser_result/KNN_train/' + category + '_KNN_train.csv', index=False) - - -if __name__ =="__main__": - read_data("mechanical_composition") - for c in category: - get_mechnical('mechanical.xls', c) - - - - - diff --git a/spaces/Saturdays/deepfake-detection/app.py b/spaces/Saturdays/deepfake-detection/app.py deleted file mode 100644 index 11dbdc16f719399c68191981e664b950d2de6221..0000000000000000000000000000000000000000 --- a/spaces/Saturdays/deepfake-detection/app.py +++ /dev/null @@ -1,102 +0,0 @@ -import gradio as gr -import torch -import torch.nn.functional as F -from facenet_pytorch import MTCNN, InceptionResnetV1 -import os -import numpy as np -from PIL import Image -import zipfile -import cv2 -from pytorch_grad_cam import GradCAM -from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget -from pytorch_grad_cam.utils.image import show_cam_on_image - -with zipfile.ZipFile("examples.zip","r") as zip_ref: - zip_ref.extractall(".") - -DEVICE = 'cuda:0' if torch.cuda.is_available() else 'cpu' - -mtcnn = MTCNN( - select_largest=False, - post_process=False, - device=DEVICE -).to(DEVICE).eval() - -model = InceptionResnetV1( - pretrained="vggface2", - classify=True, - num_classes=1, - device=DEVICE -) - -checkpoint = torch.load("resnetinceptionv1_epoch_32.pth", map_location=torch.device('cpu')) -model.load_state_dict(checkpoint['model_state_dict']) -model.to(DEVICE) -model.eval() - -EXAMPLES_FOLDER = 'examples' -examples_names = os.listdir(EXAMPLES_FOLDER) -examples = [] -for example_name in examples_names: - example_path = os.path.join(EXAMPLES_FOLDER, example_name) - label = example_name.split('_')[0] - example = { - 'path': example_path, - 'label': label - } - examples.append(example) -np.random.shuffle(examples) # shuffle - -def predict(input_image:Image.Image, true_label:str): - """Predict the label of the input_image""" - face = mtcnn(input_image) - if face is None: - raise Exception('No face detected') - face = face.unsqueeze(0) # add the batch dimension - face = F.interpolate(face, size=(256, 256), mode='bilinear', align_corners=False) - - # convert the face into a numpy array to be able to plot it - prev_face = face.squeeze(0).permute(1, 2, 0).cpu().detach().int().numpy() - prev_face = prev_face.astype('uint8') - - face = face.to(DEVICE) - face = face.to(torch.float32) - face = face / 255.0 - face_image_to_plot = face.squeeze(0).permute(1, 2, 
0).cpu().detach().int().numpy()
-
-    target_layers=[model.block8.branch1[-1]]
-    use_cuda = True if torch.cuda.is_available() else False
-    cam = GradCAM(model=model, target_layers=target_layers, use_cuda=use_cuda)
-    targets = [ClassifierOutputTarget(0)]
-
-    grayscale_cam = cam(input_tensor=face, targets=targets, eigen_smooth=True)
-    grayscale_cam = grayscale_cam[0, :]
-    visualization = show_cam_on_image(face_image_to_plot, grayscale_cam, use_rgb=True)
-    face_with_mask = cv2.addWeighted(prev_face, 1, visualization, 0.5, 0)
-
-    with torch.no_grad():
-        output = torch.sigmoid(model(face).squeeze(0))
-        prediction = "real" if output.item() < 0.5 else "fake"
-
-        real_prediction = 1 - output.item()
-        fake_prediction = output.item()
-
-        confidences = {
-            'real': real_prediction,
-            'fake': fake_prediction
-        }
-    return confidences, true_label, face_with_mask
-
-interface = gr.Interface(
-    fn=predict,
-    inputs=[
-        gr.inputs.Image(label="Input Image", type="pil"),
-        "text"
-    ],
-    outputs=[
-        gr.outputs.Label(label="Class"),
-        "text",
-        gr.outputs.Image(label="Face with Explainability")
-    ],
-    examples=[[examples[i]["path"], examples[i]["label"]] for i in range(10)]
-).launch()
\ No newline at end of file
diff --git a/spaces/SeViLA/SeViLA/lavis/datasets/download_scripts/download_msvd.py b/spaces/SeViLA/SeViLA/lavis/datasets/download_scripts/download_msvd.py
deleted file mode 100644
index c4bf5467f3af7acdde7f7a25a38d28c599525771..0000000000000000000000000000000000000000
--- a/spaces/SeViLA/SeViLA/lavis/datasets/download_scripts/download_msvd.py
+++ /dev/null
@@ -1,67 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-import os
-from pathlib import Path
-
-from omegaconf import OmegaConf
-
-from lavis.common.utils import (
-    cleanup_dir,
-    download_and_extract_archive,
-    get_abs_path,
-    get_cache_path,
-)
-
-
-DATA_URL = "https://www.cs.utexas.edu/users/ml/clamp/videoDescription/YouTubeClips.tar"
-
-
-def download_datasets(root, url):
-    download_and_extract_archive(url=url, download_root=root)
-
-
-def move_files(download_path, storage_path):
-    """
-    Move files from download_path to storage_path
-    """
-    print("Moving to {}".format(storage_path))
-
-    os.makedirs(storage_path, exist_ok=True)
-
-    for file_name in os.listdir(download_path):
-        os.rename(
-            os.path.join(download_path, file_name),
-            os.path.join(storage_path, file_name),
-        )
-
-
-if __name__ == "__main__":
-
-    config_path = get_abs_path("configs/datasets/msvd/defaults_cap.yaml")
-
-    storage_dir = OmegaConf.load(
-        config_path
-    ).datasets.msvd_cap.build_info.videos.storage
-
-    download_dir = Path(get_cache_path(storage_dir)).parent / "download"
-    storage_dir = Path(get_cache_path(storage_dir))
-
-    if storage_dir.exists():
-        print(f"Dataset already exists at {storage_dir}. Aborting.")
-        exit(0)
-
-    try:
-        print("Downloading {}".format(DATA_URL))
-        download_datasets(download_dir, DATA_URL)
-    except Exception as e:
-        # remove download dir if failed
-        cleanup_dir(download_dir)
-        print("Failed to download or extract datasets.
Aborting.") - - move_files(download_dir / "YouTubeClips", storage_dir) - cleanup_dir(download_dir) diff --git a/spaces/SkKalit/KalitGenAiChatbot/README.md b/spaces/SkKalit/KalitGenAiChatbot/README.md deleted file mode 100644 index 4d60263216b8780722eca4ec51a7db9f0620ea3a..0000000000000000000000000000000000000000 --- a/spaces/SkKalit/KalitGenAiChatbot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: KalitGenAiChatbot -emoji: 🏃 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SoAp9035/mistral-7b-fast-chat/app.py b/spaces/SoAp9035/mistral-7b-fast-chat/app.py deleted file mode 100644 index b42bdb7593463284f5e1bb636b57b9b610e561e1..0000000000000000000000000000000000000000 --- a/spaces/SoAp9035/mistral-7b-fast-chat/app.py +++ /dev/null @@ -1,102 +0,0 @@ -from huggingface_hub import InferenceClient -import gradio as gr - -client = InferenceClient( - "mistralai/Mistral-7B-Instruct-v0.1" -) - - -def format_prompt(message, history): - prompt = "" - for user_prompt, bot_response in history: - prompt += f"[INST] {user_prompt} [/INST]" - prompt += f" {bot_response} " - prompt += f"[INST] {message} [/INST]" - return prompt - -def generate( - prompt, history, temperature=0.7, max_new_tokens=256, top_p=0.95, repetition_penalty=1.1, -): - temperature = float(temperature) - if temperature < 1e-2: - temperature = 1e-2 - top_p = float(top_p) - - generate_kwargs = dict( - temperature=temperature, - max_new_tokens=max_new_tokens, - top_p=top_p, - repetition_penalty=repetition_penalty, - do_sample=True, - seed=42, - ) - - formatted_prompt = format_prompt(prompt, history) - - stream = client.text_generation(formatted_prompt, **generate_kwargs, stream=True, details=True, return_full_text=False) - output = "" - - for response in stream: - output += response.token.text - yield output - return output - - -additional_inputs=[ - gr.Slider( - label="Temperature", - value=0.7, - minimum=0.0, - maximum=1.0, - step=0.05, - interactive=True, - info="Higher values produce more diverse outputs", - ), - gr.Slider( - label="Max new tokens", - value=256, - minimum=0, - maximum=1024, - step=64, - interactive=True, - info="The maximum numbers of new tokens", - ), - gr.Slider( - label="Top-p (nucleus sampling)", - value=0.95, - minimum=0.0, - maximum=1, - step=0.05, - interactive=True, - info="Higher values sample more low-probability tokens", - ), - gr.Slider( - label="Repetition penalty", - value=1.1, - minimum=1.0, - maximum=2.0, - step=0.05, - interactive=True, - info="Penalize repeated tokens", - ) -] - -css = """ - #mkd { - height: 500px; - overflow: auto; - border: 1px solid #ccc; - } -""" - -with gr.Blocks(css=css) as demo: - gr.HTML("
Mistral 7B Instruct
") - gr.HTML("
In this demo, you can chat with the Mistral-7B-Instruct model. 💬
") - gr.HTML("
Learn more about the model here. 📚
") - gr.ChatInterface( - generate, - additional_inputs=additional_inputs, - examples=[["What is the secret to life?"], ["Write me a recipe for pancakes."]] - ) - -demo.queue().launch(debug=True) \ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_path.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_path.py deleted file mode 100644 index 8a61d2f14da15939fe3024576a4f3c98e01f8b99..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_path.py +++ /dev/null @@ -1,509 +0,0 @@ -# encoding: utf-8 -"""Tests for IPython.utils.path.py""" - -# Copyright (c) IPython Development Team. -# Distributed under the terms of the Modified BSD License. - -import os -import shutil -import sys -import tempfile -import unittest -from contextlib import contextmanager -from importlib import reload -from os.path import abspath, join -from unittest.mock import patch - -import pytest -from tempfile import TemporaryDirectory - -import IPython -from IPython import paths -from IPython.testing import decorators as dec -from IPython.testing.decorators import ( - onlyif_unicode_paths, - skip_if_not_win32, - skip_win32, -) -from IPython.testing.tools import make_tempfile -from IPython.utils import path - -# Platform-dependent imports -try: - import winreg as wreg -except ImportError: - #Fake _winreg module on non-windows platforms - import types - wr_name = "winreg" - sys.modules[wr_name] = types.ModuleType(wr_name) - try: - import winreg as wreg - except ImportError: - import _winreg as wreg - - #Add entries that needs to be stubbed by the testing code - (wreg.OpenKey, wreg.QueryValueEx,) = (None, None) - -#----------------------------------------------------------------------------- -# Globals -#----------------------------------------------------------------------------- -env = os.environ -TMP_TEST_DIR = tempfile.mkdtemp() -HOME_TEST_DIR = join(TMP_TEST_DIR, "home_test_dir") -# -# Setup/teardown functions/decorators -# - -def setup_module(): - """Setup testenvironment for the module: - - - Adds dummy home dir tree - """ - # Do not mask exceptions here. In particular, catching WindowsError is a - # problem because that exception is only defined on Windows... - os.makedirs(os.path.join(HOME_TEST_DIR, 'ipython')) - - -def teardown_module(): - """Teardown testenvironment for the module: - - - Remove dummy home dir tree - """ - # Note: we remove the parent test dir, which is the root of all test - # subdirs we may have created. Use shutil instead of os.removedirs, so - # that non-empty directories are all recursively removed. - shutil.rmtree(TMP_TEST_DIR) - - -def setup_environment(): - """Setup testenvironment for some functions that are tested - in this module. In particular this functions stores attributes - and other things that we need to stub in some test functions. - This needs to be done on a function level and not module level because - each testfunction needs a pristine environment. 
- """ - global oldstuff, platformstuff - oldstuff = (env.copy(), os.name, sys.platform, path.get_home_dir, IPython.__file__, os.getcwd()) - -def teardown_environment(): - """Restore things that were remembered by the setup_environment function - """ - (oldenv, os.name, sys.platform, path.get_home_dir, IPython.__file__, old_wd) = oldstuff - os.chdir(old_wd) - reload(path) - - for key in list(env): - if key not in oldenv: - del env[key] - env.update(oldenv) - if hasattr(sys, 'frozen'): - del sys.frozen - - -# Build decorator that uses the setup_environment/setup_environment -@pytest.fixture -def environment(): - setup_environment() - yield - teardown_environment() - - -with_environment = pytest.mark.usefixtures("environment") - - -@skip_if_not_win32 -@with_environment -def test_get_home_dir_1(): - """Testcase for py2exe logic, un-compressed lib - """ - unfrozen = path.get_home_dir() - sys.frozen = True - - #fake filename for IPython.__init__ - IPython.__file__ = abspath(join(HOME_TEST_DIR, "Lib/IPython/__init__.py")) - - home_dir = path.get_home_dir() - assert home_dir == unfrozen - - -@skip_if_not_win32 -@with_environment -def test_get_home_dir_2(): - """Testcase for py2exe logic, compressed lib - """ - unfrozen = path.get_home_dir() - sys.frozen = True - #fake filename for IPython.__init__ - IPython.__file__ = abspath(join(HOME_TEST_DIR, "Library.zip/IPython/__init__.py")).lower() - - home_dir = path.get_home_dir(True) - assert home_dir == unfrozen - - -@skip_win32 -@with_environment -def test_get_home_dir_3(): - """get_home_dir() uses $HOME if set""" - env["HOME"] = HOME_TEST_DIR - home_dir = path.get_home_dir(True) - # get_home_dir expands symlinks - assert home_dir == os.path.realpath(env["HOME"]) - - -@with_environment -def test_get_home_dir_4(): - """get_home_dir() still works if $HOME is not set""" - - if 'HOME' in env: del env['HOME'] - # this should still succeed, but we don't care what the answer is - home = path.get_home_dir(False) - -@skip_win32 -@with_environment -def test_get_home_dir_5(): - """raise HomeDirError if $HOME is specified, but not a writable dir""" - env['HOME'] = abspath(HOME_TEST_DIR+'garbage') - # set os.name = posix, to prevent My Documents fallback on Windows - os.name = 'posix' - pytest.raises(path.HomeDirError, path.get_home_dir, True) - -# Should we stub wreg fully so we can run the test on all platforms? -@skip_if_not_win32 -@with_environment -def test_get_home_dir_8(): - """Using registry hack for 'My Documents', os=='nt' - - HOMESHARE, HOMEDRIVE, HOMEPATH, USERPROFILE and others are missing. 
- """ - os.name = 'nt' - # Remove from stub environment all keys that may be set - for key in ['HOME', 'HOMESHARE', 'HOMEDRIVE', 'HOMEPATH', 'USERPROFILE']: - env.pop(key, None) - - class key: - def __enter__(self): - pass - def Close(self): - pass - def __exit__(*args, **kwargs): - pass - - with patch.object(wreg, 'OpenKey', return_value=key()), \ - patch.object(wreg, 'QueryValueEx', return_value=[abspath(HOME_TEST_DIR)]): - home_dir = path.get_home_dir() - assert home_dir == abspath(HOME_TEST_DIR) - -@with_environment -def test_get_xdg_dir_0(): - """test_get_xdg_dir_0, check xdg_dir""" - reload(path) - path._writable_dir = lambda path: True - path.get_home_dir = lambda : 'somewhere' - os.name = "posix" - sys.platform = "linux2" - env.pop('IPYTHON_DIR', None) - env.pop('IPYTHONDIR', None) - env.pop('XDG_CONFIG_HOME', None) - - assert path.get_xdg_dir() == os.path.join("somewhere", ".config") - - -@with_environment -def test_get_xdg_dir_1(): - """test_get_xdg_dir_1, check nonexistent xdg_dir""" - reload(path) - path.get_home_dir = lambda : HOME_TEST_DIR - os.name = "posix" - sys.platform = "linux2" - env.pop('IPYTHON_DIR', None) - env.pop('IPYTHONDIR', None) - env.pop('XDG_CONFIG_HOME', None) - assert path.get_xdg_dir() is None - -@with_environment -def test_get_xdg_dir_2(): - """test_get_xdg_dir_2, check xdg_dir default to ~/.config""" - reload(path) - path.get_home_dir = lambda : HOME_TEST_DIR - os.name = "posix" - sys.platform = "linux2" - env.pop('IPYTHON_DIR', None) - env.pop('IPYTHONDIR', None) - env.pop('XDG_CONFIG_HOME', None) - cfgdir=os.path.join(path.get_home_dir(), '.config') - if not os.path.exists(cfgdir): - os.makedirs(cfgdir) - - assert path.get_xdg_dir() == cfgdir - -@with_environment -def test_get_xdg_dir_3(): - """test_get_xdg_dir_3, check xdg_dir not used on non-posix systems""" - reload(path) - path.get_home_dir = lambda : HOME_TEST_DIR - os.name = "nt" - sys.platform = "win32" - env.pop('IPYTHON_DIR', None) - env.pop('IPYTHONDIR', None) - env.pop('XDG_CONFIG_HOME', None) - cfgdir=os.path.join(path.get_home_dir(), '.config') - os.makedirs(cfgdir, exist_ok=True) - - assert path.get_xdg_dir() is None - -def test_filefind(): - """Various tests for filefind""" - f = tempfile.NamedTemporaryFile() - # print 'fname:',f.name - alt_dirs = paths.get_ipython_dir() - t = path.filefind(f.name, alt_dirs) - # print 'found:',t - - -@dec.skip_if_not_win32 -def test_get_long_path_name_win32(): - with TemporaryDirectory() as tmpdir: - - # Make a long path. Expands the path of tmpdir prematurely as it may already have a long - # path component, so ensure we include the long form of it - long_path = os.path.join(path.get_long_path_name(tmpdir), 'this is my long path name') - os.makedirs(long_path) - - # Test to see if the short path evaluates correctly. 
- short_path = os.path.join(tmpdir, 'THISIS~1') - evaluated_path = path.get_long_path_name(short_path) - assert evaluated_path.lower() == long_path.lower() - - -@dec.skip_win32 -def test_get_long_path_name(): - p = path.get_long_path_name("/usr/local") - assert p == "/usr/local" - - -class TestRaiseDeprecation(unittest.TestCase): - - @dec.skip_win32 # can't create not-user-writable dir on win - @with_environment - def test_not_writable_ipdir(self): - tmpdir = tempfile.mkdtemp() - os.name = "posix" - env.pop('IPYTHON_DIR', None) - env.pop('IPYTHONDIR', None) - env.pop('XDG_CONFIG_HOME', None) - env['HOME'] = tmpdir - ipdir = os.path.join(tmpdir, '.ipython') - os.mkdir(ipdir, 0o555) - try: - open(os.path.join(ipdir, "_foo_"), "w", encoding="utf-8").close() - except IOError: - pass - else: - # I can still write to an unwritable dir, - # assume I'm root and skip the test - pytest.skip("I can't create directories that I can't write to") - - with self.assertWarnsRegex(UserWarning, 'is not a writable location'): - ipdir = paths.get_ipython_dir() - env.pop('IPYTHON_DIR', None) - -@with_environment -def test_get_py_filename(): - os.chdir(TMP_TEST_DIR) - with make_tempfile("foo.py"): - assert path.get_py_filename("foo.py") == "foo.py" - assert path.get_py_filename("foo") == "foo.py" - with make_tempfile("foo"): - assert path.get_py_filename("foo") == "foo" - pytest.raises(IOError, path.get_py_filename, "foo.py") - pytest.raises(IOError, path.get_py_filename, "foo") - pytest.raises(IOError, path.get_py_filename, "foo.py") - true_fn = "foo with spaces.py" - with make_tempfile(true_fn): - assert path.get_py_filename("foo with spaces") == true_fn - assert path.get_py_filename("foo with spaces.py") == true_fn - pytest.raises(IOError, path.get_py_filename, '"foo with spaces.py"') - pytest.raises(IOError, path.get_py_filename, "'foo with spaces.py'") - -@onlyif_unicode_paths -def test_unicode_in_filename(): - """When a file doesn't exist, the exception raised should be safe to call - str() on - i.e. in Python 2 it must only have ASCII characters. - - https://github.com/ipython/ipython/issues/875 - """ - try: - # these calls should not throw unicode encode exceptions - path.get_py_filename('fooéè.py') - except IOError as ex: - str(ex) - - -class TestShellGlob(unittest.TestCase): - - @classmethod - def setUpClass(cls): - cls.filenames_start_with_a = ['a0', 'a1', 'a2'] - cls.filenames_end_with_b = ['0b', '1b', '2b'] - cls.filenames = cls.filenames_start_with_a + cls.filenames_end_with_b - cls.tempdir = TemporaryDirectory() - td = cls.tempdir.name - - with cls.in_tempdir(): - # Create empty files - for fname in cls.filenames: - open(os.path.join(td, fname), "w", encoding="utf-8").close() - - @classmethod - def tearDownClass(cls): - cls.tempdir.cleanup() - - @classmethod - @contextmanager - def in_tempdir(cls): - save = os.getcwd() - try: - os.chdir(cls.tempdir.name) - yield - finally: - os.chdir(save) - - def check_match(self, patterns, matches): - with self.in_tempdir(): - # glob returns unordered list. that's why sorted is required. 
- assert sorted(path.shellglob(patterns)) == sorted(matches) - - def common_cases(self): - return [ - (['*'], self.filenames), - (['a*'], self.filenames_start_with_a), - (['*c'], ['*c']), - (['*', 'a*', '*b', '*c'], self.filenames - + self.filenames_start_with_a - + self.filenames_end_with_b - + ['*c']), - (['a[012]'], self.filenames_start_with_a), - ] - - @skip_win32 - def test_match_posix(self): - for (patterns, matches) in self.common_cases() + [ - ([r'\*'], ['*']), - ([r'a\*', 'a*'], ['a*'] + self.filenames_start_with_a), - ([r'a\[012]'], ['a[012]']), - ]: - self.check_match(patterns, matches) - - @skip_if_not_win32 - def test_match_windows(self): - for (patterns, matches) in self.common_cases() + [ - # In windows, backslash is interpreted as path - # separator. Therefore, you can't escape glob - # using it. - ([r'a\*', 'a*'], [r'a\*'] + self.filenames_start_with_a), - ([r'a\[012]'], [r'a\[012]']), - ]: - self.check_match(patterns, matches) - - -@pytest.mark.parametrize( - "globstr, unescaped_globstr", - [ - (r"\*\[\!\]\?", "*[!]?"), - (r"\\*", r"\*"), - (r"\\\*", r"\*"), - (r"\\a", r"\a"), - (r"\a", r"\a"), - ], -) -def test_unescape_glob(globstr, unescaped_globstr): - assert path.unescape_glob(globstr) == unescaped_globstr - - -@onlyif_unicode_paths -def test_ensure_dir_exists(): - with TemporaryDirectory() as td: - d = os.path.join(td, '∂ir') - path.ensure_dir_exists(d) # create it - assert os.path.isdir(d) - path.ensure_dir_exists(d) # no-op - f = os.path.join(td, "ƒile") - open(f, "w", encoding="utf-8").close() # touch - with pytest.raises(IOError): - path.ensure_dir_exists(f) - -class TestLinkOrCopy(unittest.TestCase): - def setUp(self): - self.tempdir = TemporaryDirectory() - self.src = self.dst("src") - with open(self.src, "w", encoding="utf-8") as f: - f.write("Hello, world!") - - def tearDown(self): - self.tempdir.cleanup() - - def dst(self, *args): - return os.path.join(self.tempdir.name, *args) - - def assert_inode_not_equal(self, a, b): - assert ( - os.stat(a).st_ino != os.stat(b).st_ino - ), "%r and %r do reference the same indoes" % (a, b) - - def assert_inode_equal(self, a, b): - assert ( - os.stat(a).st_ino == os.stat(b).st_ino - ), "%r and %r do not reference the same indoes" % (a, b) - - def assert_content_equal(self, a, b): - with open(a, "rb") as a_f: - with open(b, "rb") as b_f: - assert a_f.read() == b_f.read() - - @skip_win32 - def test_link_successful(self): - dst = self.dst("target") - path.link_or_copy(self.src, dst) - self.assert_inode_equal(self.src, dst) - - @skip_win32 - def test_link_into_dir(self): - dst = self.dst("some_dir") - os.mkdir(dst) - path.link_or_copy(self.src, dst) - expected_dst = self.dst("some_dir", os.path.basename(self.src)) - self.assert_inode_equal(self.src, expected_dst) - - @skip_win32 - def test_target_exists(self): - dst = self.dst("target") - open(dst, "w", encoding="utf-8").close() - path.link_or_copy(self.src, dst) - self.assert_inode_equal(self.src, dst) - - @skip_win32 - def test_no_link(self): - real_link = os.link - try: - del os.link - dst = self.dst("target") - path.link_or_copy(self.src, dst) - self.assert_content_equal(self.src, dst) - self.assert_inode_not_equal(self.src, dst) - finally: - os.link = real_link - - @skip_if_not_win32 - def test_windows(self): - dst = self.dst("target") - path.link_or_copy(self.src, dst) - self.assert_content_equal(self.src, dst) - - def test_link_twice(self): - # Linking the same file twice shouldn't leave duplicates around. 
- # See https://github.com/ipython/ipython/issues/6450 - dst = self.dst('target') - path.link_or_copy(self.src, dst) - path.link_or_copy(self.src, dst) - self.assert_inode_equal(self.src, dst) - assert sorted(os.listdir(self.tempdir.name)) == ["src", "target"] diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/click/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/click/__init__.py deleted file mode 100644 index e3ef423b61f87b03d689ffc6d56fc30495a30228..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/click/__init__.py +++ /dev/null @@ -1,73 +0,0 @@ -""" -Click is a simple Python module inspired by the stdlib optparse to make -writing command line scripts fun. Unlike other modules, it's based -around a simple API that does not come with too much magic and is -composable. -""" -from .core import Argument as Argument -from .core import BaseCommand as BaseCommand -from .core import Command as Command -from .core import CommandCollection as CommandCollection -from .core import Context as Context -from .core import Group as Group -from .core import MultiCommand as MultiCommand -from .core import Option as Option -from .core import Parameter as Parameter -from .decorators import argument as argument -from .decorators import command as command -from .decorators import confirmation_option as confirmation_option -from .decorators import group as group -from .decorators import help_option as help_option -from .decorators import make_pass_decorator as make_pass_decorator -from .decorators import option as option -from .decorators import pass_context as pass_context -from .decorators import pass_obj as pass_obj -from .decorators import password_option as password_option -from .decorators import version_option as version_option -from .exceptions import Abort as Abort -from .exceptions import BadArgumentUsage as BadArgumentUsage -from .exceptions import BadOptionUsage as BadOptionUsage -from .exceptions import BadParameter as BadParameter -from .exceptions import ClickException as ClickException -from .exceptions import FileError as FileError -from .exceptions import MissingParameter as MissingParameter -from .exceptions import NoSuchOption as NoSuchOption -from .exceptions import UsageError as UsageError -from .formatting import HelpFormatter as HelpFormatter -from .formatting import wrap_text as wrap_text -from .globals import get_current_context as get_current_context -from .parser import OptionParser as OptionParser -from .termui import clear as clear -from .termui import confirm as confirm -from .termui import echo_via_pager as echo_via_pager -from .termui import edit as edit -from .termui import getchar as getchar -from .termui import launch as launch -from .termui import pause as pause -from .termui import progressbar as progressbar -from .termui import prompt as prompt -from .termui import secho as secho -from .termui import style as style -from .termui import unstyle as unstyle -from .types import BOOL as BOOL -from .types import Choice as Choice -from .types import DateTime as DateTime -from .types import File as File -from .types import FLOAT as FLOAT -from .types import FloatRange as FloatRange -from .types import INT as INT -from .types import IntRange as IntRange -from .types import ParamType as ParamType -from .types import Path as Path -from .types import STRING as STRING -from .types import Tuple as Tuple -from .types import UNPROCESSED as UNPROCESSED -from .types import UUID as UUID -from 
.utils import echo as echo -from .utils import format_filename as format_filename -from .utils import get_app_dir as get_app_dir -from .utils import get_binary_stream as get_binary_stream -from .utils import get_text_stream as get_text_stream -from .utils import open_file as open_file - -__version__ = "8.1.3" diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_plugins/jinja2_debug.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_plugins/jinja2_debug.py deleted file mode 100644 index a5e4a000ccd90c767a00e1d043c32bb14db4df3c..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_plugins/jinja2_debug.py +++ /dev/null @@ -1,506 +0,0 @@ -from _pydevd_bundle.pydevd_constants import STATE_SUSPEND, JINJA2_SUSPEND -from _pydevd_bundle.pydevd_comm import CMD_SET_BREAK, CMD_ADD_EXCEPTION_BREAK -from pydevd_file_utils import canonical_normalized_path -from _pydevd_bundle.pydevd_frame_utils import add_exception_to_frame, FCode -from _pydev_bundle import pydev_log -from pydevd_plugins.pydevd_line_validation import LineBreakpointWithLazyValidation, ValidationInfo -from _pydev_bundle.pydev_override import overrides -from _pydevd_bundle.pydevd_api import PyDevdAPI - - -class Jinja2LineBreakpoint(LineBreakpointWithLazyValidation): - - def __init__(self, canonical_normalized_filename, breakpoint_id, line, condition, func_name, expression, hit_condition=None, is_logpoint=False): - self.canonical_normalized_filename = canonical_normalized_filename - LineBreakpointWithLazyValidation.__init__(self, breakpoint_id, line, condition, func_name, expression, hit_condition=hit_condition, is_logpoint=is_logpoint) - - def __str__(self): - return "Jinja2LineBreakpoint: %s-%d" % (self.canonical_normalized_filename, self.line) - - -class _Jinja2ValidationInfo(ValidationInfo): - - @overrides(ValidationInfo._collect_valid_lines_in_template_uncached) - def _collect_valid_lines_in_template_uncached(self, template): - lineno_mapping = _get_frame_lineno_mapping(template) - if not lineno_mapping: - return set() - - return set(x[0] for x in lineno_mapping) - - -def add_line_breakpoint(plugin, pydb, type, canonical_normalized_filename, breakpoint_id, line, condition, expression, func_name, hit_condition=None, is_logpoint=False, add_breakpoint_result=None, on_changed_breakpoint_state=None): - if type == 'jinja2-line': - jinja2_line_breakpoint = Jinja2LineBreakpoint(canonical_normalized_filename, breakpoint_id, line, condition, func_name, expression, hit_condition=hit_condition, is_logpoint=is_logpoint) - if not hasattr(pydb, 'jinja2_breakpoints'): - _init_plugin_breaks(pydb) - - add_breakpoint_result.error_code = PyDevdAPI.ADD_BREAKPOINT_LAZY_VALIDATION - jinja2_line_breakpoint.add_breakpoint_result = add_breakpoint_result - jinja2_line_breakpoint.on_changed_breakpoint_state = on_changed_breakpoint_state - - return jinja2_line_breakpoint, pydb.jinja2_breakpoints - return None - - -def after_breakpoints_consolidated(plugin, py_db, canonical_normalized_filename, id_to_pybreakpoint, file_to_line_to_breakpoints): - jinja2_breakpoints_for_file = file_to_line_to_breakpoints.get(canonical_normalized_filename) - if not jinja2_breakpoints_for_file: - return - - if not hasattr(py_db, 'jinja2_validation_info'): - _init_plugin_breaks(py_db) - - # In general we validate the breakpoints only when the template is loaded, but if the template - # was already loaded, we can 
validate the breakpoints based on the last loaded value. - py_db.jinja2_validation_info.verify_breakpoints_from_template_cached_lines( - py_db, canonical_normalized_filename, jinja2_breakpoints_for_file) - - -def add_exception_breakpoint(plugin, pydb, type, exception): - if type == 'jinja2': - if not hasattr(pydb, 'jinja2_exception_break'): - _init_plugin_breaks(pydb) - pydb.jinja2_exception_break[exception] = True - return True - return False - - -def _init_plugin_breaks(pydb): - pydb.jinja2_exception_break = {} - pydb.jinja2_breakpoints = {} - - pydb.jinja2_validation_info = _Jinja2ValidationInfo() - - -def remove_all_exception_breakpoints(plugin, pydb): - if hasattr(pydb, 'jinja2_exception_break'): - pydb.jinja2_exception_break = {} - return True - return False - - -def remove_exception_breakpoint(plugin, pydb, type, exception): - if type == 'jinja2': - try: - del pydb.jinja2_exception_break[exception] - return True - except: - pass - return False - - -def get_breakpoints(plugin, pydb, type): - if type == 'jinja2-line': - return pydb.jinja2_breakpoints - return None - - -def _is_jinja2_render_call(frame): - try: - name = frame.f_code.co_name - if "__jinja_template__" in frame.f_globals and name in ("root", "loop", "macro") or name.startswith("block_"): - return True - return False - except: - pydev_log.exception() - return False - - -def _suspend_jinja2(pydb, thread, frame, cmd=CMD_SET_BREAK, message=None): - frame = Jinja2TemplateFrame(frame) - - if frame.f_lineno is None: - return None - - pydb.set_suspend(thread, cmd) - - thread.additional_info.suspend_type = JINJA2_SUSPEND - if cmd == CMD_ADD_EXCEPTION_BREAK: - # send exception name as message - if message: - message = str(message) - thread.additional_info.pydev_message = message - - return frame - - -def _is_jinja2_suspended(thread): - return thread.additional_info.suspend_type == JINJA2_SUSPEND - - -def _is_jinja2_context_call(frame): - return "_Context__obj" in frame.f_locals - - -def _is_jinja2_internal_function(frame): - return 'self' in frame.f_locals and frame.f_locals['self'].__class__.__name__ in \ - ('LoopContext', 'TemplateReference', 'Macro', 'BlockReference') - - -def _find_jinja2_render_frame(frame): - while frame is not None and not _is_jinja2_render_call(frame): - frame = frame.f_back - - return frame - -#======================================================================================================================= -# Jinja2 Frame -#======================================================================================================================= - - -class Jinja2TemplateFrame(object): - - IS_PLUGIN_FRAME = True - - def __init__(self, frame, original_filename=None, template_lineno=None): - - if original_filename is None: - original_filename = _get_jinja2_template_original_filename(frame) - - if template_lineno is None: - template_lineno = _get_jinja2_template_line(frame) - - self.back_context = None - if 'context' in frame.f_locals: - # sometimes we don't have 'context', e.g. 
in macros - self.back_context = frame.f_locals['context'] - self.f_code = FCode('template', original_filename) - self.f_lineno = template_lineno - self.f_back = frame - self.f_globals = {} - self.f_locals = self.collect_context(frame) - self.f_trace = None - - def _get_real_var_name(self, orig_name): - # replace leading number for local variables - parts = orig_name.split('_') - if len(parts) > 1 and parts[0].isdigit(): - return parts[1] - return orig_name - - def collect_context(self, frame): - res = {} - for k, v in frame.f_locals.items(): - if not k.startswith('l_'): - res[k] = v - elif v and not _is_missing(v): - res[self._get_real_var_name(k[2:])] = v - if self.back_context is not None: - for k, v in self.back_context.items(): - res[k] = v - return res - - def _change_variable(self, frame, name, value): - in_vars_or_parents = False - if 'context' in frame.f_locals: - if name in frame.f_locals['context'].parent: - self.back_context.parent[name] = value - in_vars_or_parents = True - if name in frame.f_locals['context'].vars: - self.back_context.vars[name] = value - in_vars_or_parents = True - - l_name = 'l_' + name - if l_name in frame.f_locals: - if in_vars_or_parents: - frame.f_locals[l_name] = self.back_context.resolve(name) - else: - frame.f_locals[l_name] = value - - -class Jinja2TemplateSyntaxErrorFrame(object): - - IS_PLUGIN_FRAME = True - - def __init__(self, frame, exception_cls_name, filename, lineno, f_locals): - self.f_code = FCode('Jinja2 %s' % (exception_cls_name,), filename) - self.f_lineno = lineno - self.f_back = frame - self.f_globals = {} - self.f_locals = f_locals - self.f_trace = None - - -def change_variable(plugin, frame, attr, expression): - if isinstance(frame, Jinja2TemplateFrame): - result = eval(expression, frame.f_globals, frame.f_locals) - frame._change_variable(frame.f_back, attr, result) - return result - return False - - -def _is_missing(item): - if item.__class__.__name__ == 'MissingType': - return True - return False - - -def _find_render_function_frame(frame): - # in order to hide internal rendering functions - old_frame = frame - try: - while not ('self' in frame.f_locals and frame.f_locals['self'].__class__.__name__ == 'Template' and \ - frame.f_code.co_name == 'render'): - frame = frame.f_back - if frame is None: - return old_frame - return frame - except: - return old_frame - - -def _get_jinja2_template_debug_info(frame): - frame_globals = frame.f_globals - - jinja_template = frame_globals.get('__jinja_template__') - - if jinja_template is None: - return None - - return _get_frame_lineno_mapping(jinja_template) - - -def _get_frame_lineno_mapping(jinja_template): - ''' - :rtype: list(tuple(int,int)) - :return: list((original_line, line_in_frame)) - ''' - # _debug_info is a string with the mapping from frame line to actual line - # i.e.: "5=13&8=14" - _debug_info = jinja_template._debug_info - if not _debug_info: - # Sometimes template contains only plain text. 
- return None - - # debug_info is a list with the mapping from frame line to actual line - # i.e.: [(5, 13), (8, 14)] - return jinja_template.debug_info - - -def _get_jinja2_template_line(frame): - debug_info = _get_jinja2_template_debug_info(frame) - if debug_info is None: - return None - - lineno = frame.f_lineno - - for pair in debug_info: - if pair[1] == lineno: - return pair[0] - - return None - - -def _convert_to_str(s): - return s - - -def _get_jinja2_template_original_filename(frame): - if '__jinja_template__' in frame.f_globals: - return _convert_to_str(frame.f_globals['__jinja_template__'].filename) - - return None - -#======================================================================================================================= -# Jinja2 Step Commands -#======================================================================================================================= - - -def has_exception_breaks(plugin): - if len(plugin.main_debugger.jinja2_exception_break) > 0: - return True - return False - - -def has_line_breaks(plugin): - for _canonical_normalized_filename, breakpoints in plugin.main_debugger.jinja2_breakpoints.items(): - if len(breakpoints) > 0: - return True - return False - - -def can_skip(plugin, pydb, frame): - if pydb.jinja2_breakpoints and _is_jinja2_render_call(frame): - filename = _get_jinja2_template_original_filename(frame) - if filename is not None: - canonical_normalized_filename = canonical_normalized_path(filename) - jinja2_breakpoints_for_file = pydb.jinja2_breakpoints.get(canonical_normalized_filename) - if jinja2_breakpoints_for_file: - return False - - if pydb.jinja2_exception_break: - name = frame.f_code.co_name - - # errors in compile time - if name in ('template', 'top-level template code', '') or name.startswith('block '): - f_back = frame.f_back - module_name = '' - if f_back is not None: - module_name = f_back.f_globals.get('__name__', '') - if module_name.startswith('jinja2.'): - return False - - return True - - -def cmd_step_into(plugin, pydb, frame, event, args, stop_info, stop): - info = args[2] - thread = args[3] - plugin_stop = False - stop_info['jinja2_stop'] = False - if _is_jinja2_suspended(thread): - stop_info['jinja2_stop'] = event in ('call', 'line') and _is_jinja2_render_call(frame) - plugin_stop = stop_info['jinja2_stop'] - stop = False - if info.pydev_call_from_jinja2 is not None: - if _is_jinja2_internal_function(frame): - # if internal Jinja2 function was called, we sould continue debugging inside template - info.pydev_call_from_jinja2 = None - else: - # we go into python code from Jinja2 rendering frame - stop = True - - if event == 'call' and _is_jinja2_context_call(frame.f_back): - # we called function from context, the next step will be in function - info.pydev_call_from_jinja2 = 1 - - if event == 'return' and _is_jinja2_context_call(frame.f_back): - # we return from python code to Jinja2 rendering frame - info.pydev_step_stop = info.pydev_call_from_jinja2 - info.pydev_call_from_jinja2 = None - thread.additional_info.suspend_type = JINJA2_SUSPEND - stop = False - - # print "info.pydev_call_from_jinja2", info.pydev_call_from_jinja2, "stop_info", stop_info, \ - # "thread.additional_info.suspend_type", thread.additional_info.suspend_type - # print "event", event, "farme.locals", frame.f_locals - return stop, plugin_stop - - -def cmd_step_over(plugin, pydb, frame, event, args, stop_info, stop): - info = args[2] - thread = args[3] - plugin_stop = False - stop_info['jinja2_stop'] = False - if _is_jinja2_suspended(thread): 
- stop = False - - if info.pydev_call_inside_jinja2 is None: - if _is_jinja2_render_call(frame): - if event == 'call': - info.pydev_call_inside_jinja2 = frame.f_back - if event in ('line', 'return'): - info.pydev_call_inside_jinja2 = frame - else: - if event == 'line': - if _is_jinja2_render_call(frame) and info.pydev_call_inside_jinja2 is frame: - stop_info['jinja2_stop'] = True - plugin_stop = stop_info['jinja2_stop'] - if event == 'return': - if frame is info.pydev_call_inside_jinja2 and 'event' not in frame.f_back.f_locals: - info.pydev_call_inside_jinja2 = _find_jinja2_render_frame(frame.f_back) - return stop, plugin_stop - else: - if event == 'return' and _is_jinja2_context_call(frame.f_back): - # we return from python code to Jinja2 rendering frame - info.pydev_call_from_jinja2 = None - info.pydev_call_inside_jinja2 = _find_jinja2_render_frame(frame) - thread.additional_info.suspend_type = JINJA2_SUSPEND - stop = False - return stop, plugin_stop - # print "info.pydev_call_from_jinja2", info.pydev_call_from_jinja2, "stop", stop, "jinja_stop", jinja2_stop, \ - # "thread.additional_info.suspend_type", thread.additional_info.suspend_type - # print "event", event, "info.pydev_call_inside_jinja2", info.pydev_call_inside_jinja2 - # print "frame", frame, "frame.f_back", frame.f_back, "step_stop", info.pydev_step_stop - # print "is_context_call", _is_jinja2_context_call(frame) - # print "render", _is_jinja2_render_call(frame) - # print "-------------" - return stop, plugin_stop - - -def stop(plugin, pydb, frame, event, args, stop_info, arg, step_cmd): - pydb = args[0] - thread = args[3] - if 'jinja2_stop' in stop_info and stop_info['jinja2_stop']: - frame = _suspend_jinja2(pydb, thread, frame, step_cmd) - if frame: - pydb.do_wait_suspend(thread, frame, event, arg) - return True - return False - - -def get_breakpoint(plugin, py_db, pydb_frame, frame, event, args): - py_db = args[0] - _filename = args[1] - info = args[2] - break_type = 'jinja2' - - if event == 'line' and info.pydev_state != STATE_SUSPEND and py_db.jinja2_breakpoints and _is_jinja2_render_call(frame): - - jinja_template = frame.f_globals.get('__jinja_template__') - if jinja_template is None: - return False, None, None, break_type - - original_filename = _get_jinja2_template_original_filename(frame) - if original_filename is not None: - pydev_log.debug("Jinja2 is rendering a template: %s", original_filename) - canonical_normalized_filename = canonical_normalized_path(original_filename) - jinja2_breakpoints_for_file = py_db.jinja2_breakpoints.get(canonical_normalized_filename) - - if jinja2_breakpoints_for_file: - - jinja2_validation_info = py_db.jinja2_validation_info - jinja2_validation_info.verify_breakpoints(py_db, canonical_normalized_filename, jinja2_breakpoints_for_file, jinja_template) - - template_lineno = _get_jinja2_template_line(frame) - if template_lineno is not None: - jinja2_breakpoint = jinja2_breakpoints_for_file.get(template_lineno) - if jinja2_breakpoint is not None: - new_frame = Jinja2TemplateFrame(frame, original_filename, template_lineno) - return True, jinja2_breakpoint, new_frame, break_type - - return False, None, None, break_type - - -def suspend(plugin, pydb, thread, frame, bp_type): - if bp_type == 'jinja2': - return _suspend_jinja2(pydb, thread, frame) - return None - - -def exception_break(plugin, pydb, pydb_frame, frame, args, arg): - pydb = args[0] - thread = args[3] - exception, value, trace = arg - if pydb.jinja2_exception_break and exception is not None: - exception_type = 
list(pydb.jinja2_exception_break.keys())[0] - if exception.__name__ in ('UndefinedError', 'TemplateNotFound', 'TemplatesNotFound'): - # errors in rendering - render_frame = _find_jinja2_render_frame(frame) - if render_frame: - suspend_frame = _suspend_jinja2(pydb, thread, render_frame, CMD_ADD_EXCEPTION_BREAK, message=exception_type) - if suspend_frame: - add_exception_to_frame(suspend_frame, (exception, value, trace)) - suspend_frame.f_back = frame - frame = suspend_frame - return True, frame - - elif exception.__name__ in ('TemplateSyntaxError', 'TemplateAssertionError'): - name = frame.f_code.co_name - - # errors in compile time - if name in ('template', 'top-level template code', '') or name.startswith('block '): - - f_back = frame.f_back - if f_back is not None: - module_name = f_back.f_globals.get('__name__', '') - - if module_name.startswith('jinja2.'): - # Jinja2 translates exception info and creates fake frame on his own - pydb_frame.set_suspend(thread, CMD_ADD_EXCEPTION_BREAK) - add_exception_to_frame(frame, (exception, value, trace)) - thread.additional_info.suspend_type = JINJA2_SUSPEND - thread.additional_info.pydev_message = str(exception_type) - return True, frame - return None diff --git a/spaces/Superlang/ImageProcessor/annotator/leres/pix2pix/options/__init__.py b/spaces/Superlang/ImageProcessor/annotator/leres/pix2pix/options/__init__.py deleted file mode 100644 index e7eedebe54aa70169fd25951b3034d819e396c90..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/leres/pix2pix/options/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""This package options includes option modules: training options, test options, and basic options (used in both training and test).""" diff --git a/spaces/Tahsin-Mayeesha/Bangla-Question-Generation/README.md b/spaces/Tahsin-Mayeesha/Bangla-Question-Generation/README.md deleted file mode 100644 index 2dd8786929de35f83b41df9de5f5fe620bb43f62..0000000000000000000000000000000000000000 --- a/spaces/Tahsin-Mayeesha/Bangla-Question-Generation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bangla Question Generation -emoji: 👀 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/sdist.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/sdist.py deleted file mode 100644 index ac489726caef968e7b8d82d44c171862e1af1182..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/sdist.py +++ /dev/null @@ -1,530 +0,0 @@ -"""distutils.command.sdist - -Implements the Distutils 'sdist' command (create a source distribution).""" - -import os -import sys -from glob import glob -from warnings import warn - -from ..core import Command -from distutils import dir_util -from distutils import file_util -from distutils import archive_util -from ..text_file import TextFile -from ..filelist import FileList -from distutils._log import log -from ..util import convert_path -from ..errors import DistutilsOptionError, DistutilsTemplateError - - -def show_formats(): - """Print all possible values for the 'formats' option (used by - the "--help-formats" command-line option). 
- """ - from ..fancy_getopt import FancyGetopt - from ..archive_util import ARCHIVE_FORMATS - - formats = [] - for format in ARCHIVE_FORMATS.keys(): - formats.append(("formats=" + format, None, ARCHIVE_FORMATS[format][2])) - formats.sort() - FancyGetopt(formats).print_help("List of available source distribution formats:") - - -class sdist(Command): - description = "create a source distribution (tarball, zip file, etc.)" - - def checking_metadata(self): - """Callable used for the check sub-command. - - Placed here so user_options can view it""" - return self.metadata_check - - user_options = [ - ('template=', 't', "name of manifest template file [default: MANIFEST.in]"), - ('manifest=', 'm', "name of manifest file [default: MANIFEST]"), - ( - 'use-defaults', - None, - "include the default file set in the manifest " - "[default; disable with --no-defaults]", - ), - ('no-defaults', None, "don't include the default file set"), - ( - 'prune', - None, - "specifically exclude files/directories that should not be " - "distributed (build tree, RCS/CVS dirs, etc.) " - "[default; disable with --no-prune]", - ), - ('no-prune', None, "don't automatically exclude anything"), - ( - 'manifest-only', - 'o', - "just regenerate the manifest and then stop " "(implies --force-manifest)", - ), - ( - 'force-manifest', - 'f', - "forcibly regenerate the manifest and carry on as usual. " - "Deprecated: now the manifest is always regenerated.", - ), - ('formats=', None, "formats for source distribution (comma-separated list)"), - ( - 'keep-temp', - 'k', - "keep the distribution tree around after creating " + "archive file(s)", - ), - ( - 'dist-dir=', - 'd', - "directory to put the source distribution archive(s) in " "[default: dist]", - ), - ( - 'metadata-check', - None, - "Ensure that all required elements of meta-data " - "are supplied. Warn if any missing. [default]", - ), - ( - 'owner=', - 'u', - "Owner name used when creating a tar file [default: current user]", - ), - ( - 'group=', - 'g', - "Group name used when creating a tar file [default: current group]", - ), - ] - - boolean_options = [ - 'use-defaults', - 'prune', - 'manifest-only', - 'force-manifest', - 'keep-temp', - 'metadata-check', - ] - - help_options = [ - ('help-formats', None, "list available distribution formats", show_formats), - ] - - negative_opt = {'no-defaults': 'use-defaults', 'no-prune': 'prune'} - - sub_commands = [('check', checking_metadata)] - - READMES = ('README', 'README.txt', 'README.rst') - - def initialize_options(self): - # 'template' and 'manifest' are, respectively, the names of - # the manifest template and manifest file. 
- self.template = None - self.manifest = None - - # 'use_defaults': if true, we will include the default file set - # in the manifest - self.use_defaults = 1 - self.prune = 1 - - self.manifest_only = 0 - self.force_manifest = 0 - - self.formats = ['gztar'] - self.keep_temp = 0 - self.dist_dir = None - - self.archive_files = None - self.metadata_check = 1 - self.owner = None - self.group = None - - def finalize_options(self): - if self.manifest is None: - self.manifest = "MANIFEST" - if self.template is None: - self.template = "MANIFEST.in" - - self.ensure_string_list('formats') - - bad_format = archive_util.check_archive_formats(self.formats) - if bad_format: - raise DistutilsOptionError("unknown archive format '%s'" % bad_format) - - if self.dist_dir is None: - self.dist_dir = "dist" - - def run(self): - # 'filelist' contains the list of files that will make up the - # manifest - self.filelist = FileList() - - # Run sub commands - for cmd_name in self.get_sub_commands(): - self.run_command(cmd_name) - - # Do whatever it takes to get the list of files to process - # (process the manifest template, read an existing manifest, - # whatever). File list is accumulated in 'self.filelist'. - self.get_file_list() - - # If user just wanted us to regenerate the manifest, stop now. - if self.manifest_only: - return - - # Otherwise, go ahead and create the source distribution tarball, - # or zipfile, or whatever. - self.make_distribution() - - def check_metadata(self): - """Deprecated API.""" - warn( - "distutils.command.sdist.check_metadata is deprecated, \ - use the check command instead", - PendingDeprecationWarning, - ) - check = self.distribution.get_command_obj('check') - check.ensure_finalized() - check.run() - - def get_file_list(self): - """Figure out the list of files to include in the source - distribution, and put it in 'self.filelist'. This might involve - reading the manifest template (and writing the manifest), or just - reading the manifest, or just using the default file set -- it all - depends on the user's options. - """ - # new behavior when using a template: - # the file list is recalculated every time because - # even if MANIFEST.in or setup.py are not changed - # the user might have added some files in the tree that - # need to be included. - # - # This makes --force the default and only behavior with templates. - template_exists = os.path.isfile(self.template) - if not template_exists and self._manifest_is_not_generated(): - self.read_manifest() - self.filelist.sort() - self.filelist.remove_duplicates() - return - - if not template_exists: - self.warn( - ("manifest template '%s' does not exist " + "(using default file list)") - % self.template - ) - self.filelist.findall() - - if self.use_defaults: - self.add_defaults() - - if template_exists: - self.read_template() - - if self.prune: - self.prune_file_list() - - self.filelist.sort() - self.filelist.remove_duplicates() - self.write_manifest() - - def add_defaults(self): - """Add all the default files to self.filelist: - - README or README.txt - - setup.py - - tests/test*.py and test/test*.py - - all pure Python modules mentioned in setup script - - all files pointed by package_data (build_py) - - all files defined in data_files. - - all files defined as scripts. - - all C sources listed as part of extensions or C libraries - in the setup script (doesn't catch C headers!) - Warns if (README or README.txt) or setup.py are missing; everything - else is optional. 
- """ - self._add_defaults_standards() - self._add_defaults_optional() - self._add_defaults_python() - self._add_defaults_data_files() - self._add_defaults_ext() - self._add_defaults_c_libs() - self._add_defaults_scripts() - - @staticmethod - def _cs_path_exists(fspath): - """ - Case-sensitive path existence check - - >>> sdist._cs_path_exists(__file__) - True - >>> sdist._cs_path_exists(__file__.upper()) - False - """ - if not os.path.exists(fspath): - return False - # make absolute so we always have a directory - abspath = os.path.abspath(fspath) - directory, filename = os.path.split(abspath) - return filename in os.listdir(directory) - - def _add_defaults_standards(self): - standards = [self.READMES, self.distribution.script_name] - for fn in standards: - if isinstance(fn, tuple): - alts = fn - got_it = False - for fn in alts: - if self._cs_path_exists(fn): - got_it = True - self.filelist.append(fn) - break - - if not got_it: - self.warn( - "standard file not found: should have one of " + ', '.join(alts) - ) - else: - if self._cs_path_exists(fn): - self.filelist.append(fn) - else: - self.warn("standard file '%s' not found" % fn) - - def _add_defaults_optional(self): - optional = ['tests/test*.py', 'test/test*.py', 'setup.cfg'] - for pattern in optional: - files = filter(os.path.isfile, glob(pattern)) - self.filelist.extend(files) - - def _add_defaults_python(self): - # build_py is used to get: - # - python modules - # - files defined in package_data - build_py = self.get_finalized_command('build_py') - - # getting python files - if self.distribution.has_pure_modules(): - self.filelist.extend(build_py.get_source_files()) - - # getting package_data files - # (computed in build_py.data_files by build_py.finalize_options) - for pkg, src_dir, build_dir, filenames in build_py.data_files: - for filename in filenames: - self.filelist.append(os.path.join(src_dir, filename)) - - def _add_defaults_data_files(self): - # getting distribution.data_files - if self.distribution.has_data_files(): - for item in self.distribution.data_files: - if isinstance(item, str): - # plain file - item = convert_path(item) - if os.path.isfile(item): - self.filelist.append(item) - else: - # a (dirname, filenames) tuple - dirname, filenames = item - for f in filenames: - f = convert_path(f) - if os.path.isfile(f): - self.filelist.append(f) - - def _add_defaults_ext(self): - if self.distribution.has_ext_modules(): - build_ext = self.get_finalized_command('build_ext') - self.filelist.extend(build_ext.get_source_files()) - - def _add_defaults_c_libs(self): - if self.distribution.has_c_libraries(): - build_clib = self.get_finalized_command('build_clib') - self.filelist.extend(build_clib.get_source_files()) - - def _add_defaults_scripts(self): - if self.distribution.has_scripts(): - build_scripts = self.get_finalized_command('build_scripts') - self.filelist.extend(build_scripts.get_source_files()) - - def read_template(self): - """Read and parse manifest template file named by self.template. - - (usually "MANIFEST.in") The parsing and processing is done by - 'self.filelist', which updates itself accordingly. 
- """ - log.info("reading manifest template '%s'", self.template) - template = TextFile( - self.template, - strip_comments=1, - skip_blanks=1, - join_lines=1, - lstrip_ws=1, - rstrip_ws=1, - collapse_join=1, - ) - - try: - while True: - line = template.readline() - if line is None: # end of file - break - - try: - self.filelist.process_template_line(line) - # the call above can raise a DistutilsTemplateError for - # malformed lines, or a ValueError from the lower-level - # convert_path function - except (DistutilsTemplateError, ValueError) as msg: - self.warn( - "%s, line %d: %s" - % (template.filename, template.current_line, msg) - ) - finally: - template.close() - - def prune_file_list(self): - """Prune off branches that might slip into the file list as created - by 'read_template()', but really don't belong there: - * the build tree (typically "build") - * the release tree itself (only an issue if we ran "sdist" - previously with --keep-temp, or it aborted) - * any RCS, CVS, .svn, .hg, .git, .bzr, _darcs directories - """ - build = self.get_finalized_command('build') - base_dir = self.distribution.get_fullname() - - self.filelist.exclude_pattern(None, prefix=build.build_base) - self.filelist.exclude_pattern(None, prefix=base_dir) - - if sys.platform == 'win32': - seps = r'/|\\' - else: - seps = '/' - - vcs_dirs = ['RCS', 'CVS', r'\.svn', r'\.hg', r'\.git', r'\.bzr', '_darcs'] - vcs_ptrn = r'(^|{})({})({}).*'.format(seps, '|'.join(vcs_dirs), seps) - self.filelist.exclude_pattern(vcs_ptrn, is_regex=1) - - def write_manifest(self): - """Write the file list in 'self.filelist' (presumably as filled in - by 'add_defaults()' and 'read_template()') to the manifest file - named by 'self.manifest'. - """ - if self._manifest_is_not_generated(): - log.info( - "not writing to manually maintained " - "manifest file '%s'" % self.manifest - ) - return - - content = self.filelist.files[:] - content.insert(0, '# file GENERATED by distutils, do NOT edit') - self.execute( - file_util.write_file, - (self.manifest, content), - "writing manifest file '%s'" % self.manifest, - ) - - def _manifest_is_not_generated(self): - # check for special comment used in 3.1.3 and higher - if not os.path.isfile(self.manifest): - return False - - fp = open(self.manifest) - try: - first_line = fp.readline() - finally: - fp.close() - return first_line != '# file GENERATED by distutils, do NOT edit\n' - - def read_manifest(self): - """Read the manifest file (named by 'self.manifest') and use it to - fill in 'self.filelist', the list of files to include in the source - distribution. - """ - log.info("reading manifest file '%s'", self.manifest) - with open(self.manifest) as manifest: - for line in manifest: - # ignore comments and blank lines - line = line.strip() - if line.startswith('#') or not line: - continue - self.filelist.append(line) - - def make_release_tree(self, base_dir, files): - """Create the directory tree that will become the source - distribution archive. All directories implied by the filenames in - 'files' are created under 'base_dir', and then we hard link or copy - (if hard linking is unavailable) those files into place. - Essentially, this duplicates the developer's source tree, but in a - directory named after the distribution, containing only the files - to be distributed. - """ - # Create all the directories under 'base_dir' necessary to - # put 'files' there; the 'mkpath()' is just so we don't die - # if the manifest happens to be empty. 
- self.mkpath(base_dir) - dir_util.create_tree(base_dir, files, dry_run=self.dry_run) - - # And walk over the list of files, either making a hard link (if - # os.link exists) to each one that doesn't already exist in its - # corresponding location under 'base_dir', or copying each file - # that's out-of-date in 'base_dir'. (Usually, all files will be - # out-of-date, because by default we blow away 'base_dir' when - # we're done making the distribution archives.) - - if hasattr(os, 'link'): # can make hard links on this system - link = 'hard' - msg = "making hard links in %s..." % base_dir - else: # nope, have to copy - link = None - msg = "copying files to %s..." % base_dir - - if not files: - log.warning("no files to distribute -- empty manifest?") - else: - log.info(msg) - for file in files: - if not os.path.isfile(file): - log.warning("'%s' not a regular file -- skipping", file) - else: - dest = os.path.join(base_dir, file) - self.copy_file(file, dest, link=link) - - self.distribution.metadata.write_pkg_info(base_dir) - - def make_distribution(self): - """Create the source distribution(s). First, we create the release - tree with 'make_release_tree()'; then, we create all required - archive files (according to 'self.formats') from the release tree. - Finally, we clean up by blowing away the release tree (unless - 'self.keep_temp' is true). The list of archive files created is - stored so it can be retrieved later by 'get_archive_files()'. - """ - # Don't warn about missing meta-data here -- should be (and is!) - # done elsewhere. - base_dir = self.distribution.get_fullname() - base_name = os.path.join(self.dist_dir, base_dir) - - self.make_release_tree(base_dir, self.filelist.files) - archive_files = [] # remember names of files we create - # tar archive must be created last to avoid overwrite and remove - if 'tar' in self.formats: - self.formats.append(self.formats.pop(self.formats.index('tar'))) - - for fmt in self.formats: - file = self.make_archive( - base_name, fmt, base_dir=base_dir, owner=self.owner, group=self.group - ) - archive_files.append(file) - self.distribution.dist_files.append(('sdist', '', file)) - - self.archive_files = archive_files - - if not self.keep_temp: - dir_util.remove_tree(base_dir, dry_run=self.dry_run) - - def get_archive_files(self): - """Return the list of archive files created when the command - was run, or None if the command hasn't run yet. - """ - return self.archive_files diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/unpack.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/unpack.py deleted file mode 100644 index d48840e6ec0512225233bf02d1d7ce203415b04c..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/unpack.py +++ /dev/null @@ -1,30 +0,0 @@ -from __future__ import annotations - -from pathlib import Path - -from ..wheelfile import WheelFile - - -def unpack(path: str, dest: str = ".") -> None: - """Unpack a wheel. - - Wheel content will be unpacked to {dest}/{name}-{ver}, where {name} - is the package name and {ver} its version. - - :param path: The path to the wheel. - :param dest: Destination directory (default to current directory). 
- """ - with WheelFile(path) as wf: - namever = wf.parsed_filename.group("namever") - destination = Path(dest) / namever - print(f"Unpacking to: {destination}...", end="", flush=True) - for zinfo in wf.filelist: - wf.extract(zinfo, destination) - - # Set permissions to the same values as they were set in the archive - # We have to do this manually due to - # https://github.com/python/cpython/issues/59999 - permissions = zinfo.external_attr >> 16 & 0o777 - destination.joinpath(zinfo.filename).chmod(permissions) - - print("OK") diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/mask_ops.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/mask_ops.py deleted file mode 100644 index e7a9f3a323ddbe75845b668ee6b40c5385d206c3..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/mask_ops.py +++ /dev/null @@ -1,275 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -from typing import Tuple -import torch -from PIL import Image -from torch.nn import functional as F - -__all__ = ["paste_masks_in_image"] - - -BYTES_PER_FLOAT = 4 -# TODO: This memory limit may be too much or too little. It would be better to -# determine it based on available resources. -GPU_MEM_LIMIT = 1024 ** 3 # 1 GB memory limit - - -def _do_paste_mask(masks, boxes, img_h: int, img_w: int, skip_empty: bool = True): - """ - Args: - masks: N, 1, H, W - boxes: N, 4 - img_h, img_w (int): - skip_empty (bool): only paste masks within the region that - tightly bound all boxes, and returns the results this region only. - An important optimization for CPU. - - Returns: - if skip_empty == False, a mask of shape (N, img_h, img_w) - if skip_empty == True, a mask of shape (N, h', w'), and the slice - object for the corresponding region. - """ - # On GPU, paste all masks together (up to chunk size) - # by using the entire image to sample the masks - # Compared to pasting them one by one, - # this has more operations but is faster on COCO-scale dataset. 
-    device = masks.device
-
-    if skip_empty and not torch.jit.is_scripting():
-        x0_int, y0_int = torch.clamp(boxes.min(dim=0).values.floor()[:2] - 1, min=0).to(
-            dtype=torch.int32
-        )
-        x1_int = torch.clamp(boxes[:, 2].max().ceil() + 1, max=img_w).to(dtype=torch.int32)
-        y1_int = torch.clamp(boxes[:, 3].max().ceil() + 1, max=img_h).to(dtype=torch.int32)
-    else:
-        x0_int, y0_int = 0, 0
-        x1_int, y1_int = img_w, img_h
-    x0, y0, x1, y1 = torch.split(boxes, 1, dim=1)  # each is Nx1
-
-    N = masks.shape[0]
-
-    img_y = torch.arange(y0_int, y1_int, device=device, dtype=torch.float32) + 0.5
-    img_x = torch.arange(x0_int, x1_int, device=device, dtype=torch.float32) + 0.5
-    img_y = (img_y - y0) / (y1 - y0) * 2 - 1
-    img_x = (img_x - x0) / (x1 - x0) * 2 - 1
-    # img_x, img_y have shapes (N, w), (N, h)
-
-    gx = img_x[:, None, :].expand(N, img_y.size(1), img_x.size(1))
-    gy = img_y[:, :, None].expand(N, img_y.size(1), img_x.size(1))
-    grid = torch.stack([gx, gy], dim=3)
-
-    if not torch.jit.is_scripting():
-        if not masks.dtype.is_floating_point:
-            masks = masks.float()
-    img_masks = F.grid_sample(masks, grid.to(masks.dtype), align_corners=False)
-
-    if skip_empty and not torch.jit.is_scripting():
-        return img_masks[:, 0], (slice(y0_int, y1_int), slice(x0_int, x1_int))
-    else:
-        return img_masks[:, 0], ()
-
-
-# Annotate boxes as Tensor (but not Boxes) in order to use scripting
-@torch.jit.script_if_tracing
-def paste_masks_in_image(
-    masks: torch.Tensor, boxes: torch.Tensor, image_shape: Tuple[int, int], threshold: float = 0.5
-):
-    """
-    Paste a set of masks that are of a fixed resolution (e.g., 28 x 28) into an image.
-    The location, height, and width for pasting each mask are determined by its
-    corresponding bounding box in boxes.
-
-    Note:
-        This is a complicated but more accurate implementation. In actual deployment, it is
-        often enough to use a faster but less accurate implementation.
-        See :func:`paste_mask_in_image_old` in this file for an alternative implementation.
-
-    Args:
-        masks (tensor): Tensor of shape (Bimg, Hmask, Wmask), where Bimg is the number of
-            detected object instances in the image and Hmask, Wmask are the mask height and
-            mask width of the predicted mask (e.g., Hmask = Wmask = 28). Values are in [0, 1].
-        boxes (Boxes or Tensor): A Boxes of length Bimg or Tensor of shape (Bimg, 4).
-            boxes[i] and masks[i] correspond to the same object instance.
-        image_shape (tuple): height, width
-        threshold (float): A threshold in [0, 1] for converting the (soft) masks to
-            binary masks.
-
-    Returns:
-        img_masks (Tensor): A tensor of shape (Bimg, Himage, Wimage), where Bimg is the
-            number of detected object instances and Himage, Wimage are the image height
-            and width. img_masks[i] is a binary mask for object instance i.
-    """
-
-    assert masks.shape[-1] == masks.shape[-2], "Only square mask predictions are supported"
-    N = len(masks)
-    if N == 0:
-        return masks.new_empty((0,) + image_shape, dtype=torch.uint8)
-    if not isinstance(boxes, torch.Tensor):
-        boxes = boxes.tensor
-    device = boxes.device
-    assert len(boxes) == N, boxes.shape
-
-    img_h, img_w = image_shape
-
-    # The actual implementation splits the input into chunks,
-    # and pastes them chunk by chunk.
-    if device.type == "cpu" or torch.jit.is_scripting():
-        # CPU is most efficient when the masks are pasted one by one with skip_empty=True,
-        # so that it performs a minimal number of operations.
- num_chunks = N - else: - # GPU benefits from parallelism for larger chunks, but may have memory issue - # int(img_h) because shape may be tensors in tracing - num_chunks = int(np.ceil(N * int(img_h) * int(img_w) * BYTES_PER_FLOAT / GPU_MEM_LIMIT)) - assert ( - num_chunks <= N - ), "Default GPU_MEM_LIMIT in mask_ops.py is too small; try increasing it" - chunks = torch.chunk(torch.arange(N, device=device), num_chunks) - - img_masks = torch.zeros( - N, img_h, img_w, device=device, dtype=torch.bool if threshold >= 0 else torch.uint8 - ) - for inds in chunks: - masks_chunk, spatial_inds = _do_paste_mask( - masks[inds, None, :, :], boxes[inds], img_h, img_w, skip_empty=device.type == "cpu" - ) - - if threshold >= 0: - masks_chunk = (masks_chunk >= threshold).to(dtype=torch.bool) - else: - # for visualization and debugging - masks_chunk = (masks_chunk * 255).to(dtype=torch.uint8) - - if torch.jit.is_scripting(): # Scripting does not use the optimized codepath - img_masks[inds] = masks_chunk - else: - img_masks[(inds,) + spatial_inds] = masks_chunk - return img_masks - - -# The below are the original paste function (from Detectron1) which has -# larger quantization error. -# It is faster on CPU, while the aligned one is faster on GPU thanks to grid_sample. - - -def paste_mask_in_image_old(mask, box, img_h, img_w, threshold): - """ - Paste a single mask in an image. - This is a per-box implementation of :func:`paste_masks_in_image`. - This function has larger quantization error due to incorrect pixel - modeling and is not used any more. - - Args: - mask (Tensor): A tensor of shape (Hmask, Wmask) storing the mask of a single - object instance. Values are in [0, 1]. - box (Tensor): A tensor of shape (4, ) storing the x0, y0, x1, y1 box corners - of the object instance. - img_h, img_w (int): Image height and width. - threshold (float): Mask binarization threshold in [0, 1]. - - Returns: - im_mask (Tensor): - The resized and binarized object mask pasted into the original - image plane (a tensor of shape (img_h, img_w)). - """ - # Conversion from continuous box coordinates to discrete pixel coordinates - # via truncation (cast to int32). This determines which pixels to paste the - # mask onto. - box = box.to(dtype=torch.int32) # Continuous to discrete coordinate conversion - # An example (1D) box with continuous coordinates (x0=0.7, x1=4.3) will map to - # a discrete coordinates (x0=0, x1=4). Note that box is mapped to 5 = x1 - x0 + 1 - # pixels (not x1 - x0 pixels). 
-    samples_w = box[2] - box[0] + 1  # Number of pixel samples, *not* geometric width
-    samples_h = box[3] - box[1] + 1  # Number of pixel samples, *not* geometric height
-
-    # Resample the mask from its original grid to the new samples_w x samples_h grid
-    mask = Image.fromarray(mask.cpu().numpy())
-    mask = mask.resize((samples_w, samples_h), resample=Image.BILINEAR)
-    mask = np.array(mask, copy=False)
-
-    if threshold >= 0:
-        mask = np.array(mask > threshold, dtype=np.uint8)
-        mask = torch.from_numpy(mask)
-    else:
-        # for visualization and debugging, we also
-        # allow it to return an unmodified mask
-        mask = torch.from_numpy(mask * 255).to(torch.uint8)
-
-    im_mask = torch.zeros((img_h, img_w), dtype=torch.uint8)
-    x_0 = max(box[0], 0)
-    x_1 = min(box[2] + 1, img_w)
-    y_0 = max(box[1], 0)
-    y_1 = min(box[3] + 1, img_h)
-
-    im_mask[y_0:y_1, x_0:x_1] = mask[
-        (y_0 - box[1]) : (y_1 - box[1]), (x_0 - box[0]) : (x_1 - box[0])
-    ]
-    return im_mask
-
-
-# Our pixel modeling requires extrapolation for any continuous
-# coordinate < 0.5 or > length - 0.5. When sampling pixels on the masks,
-# we would like this extrapolation to be an interpolation between boundary values and zero,
-# instead of using absolute zero or boundary values.
-# Therefore `paste_mask_in_image_old` is often used with zero padding around the masks like this:
-# masks, scale = pad_masks(masks[:, 0, :, :], 1)
-# boxes = scale_boxes(boxes.tensor, scale)
-
-
-def pad_masks(masks, padding):
-    """
-    Args:
-        masks (tensor): A tensor of shape (B, M, M) representing B masks.
-        padding (int): Number of cells to pad on all sides.
-
-    Returns:
-        The padded masks and the scale factor of the padding size / original size.
-    """
-    B = masks.shape[0]
-    M = masks.shape[-1]
-    pad2 = 2 * padding
-    scale = float(M + pad2) / M
-    padded_masks = masks.new_zeros((B, M + pad2, M + pad2))
-    padded_masks[:, padding:-padding, padding:-padding] = masks
-    return padded_masks, scale
-
-
-def scale_boxes(boxes, scale):
-    """
-    Args:
-        boxes (tensor): A tensor of shape (B, 4) representing B boxes with 4
-            coords representing the corners x0, y0, x1, y1.
-        scale (float): The box scaling factor.
-
-    Returns:
-        Scaled boxes.
-    """
-    w_half = (boxes[:, 2] - boxes[:, 0]) * 0.5
-    h_half = (boxes[:, 3] - boxes[:, 1]) * 0.5
-    x_c = (boxes[:, 2] + boxes[:, 0]) * 0.5
-    y_c = (boxes[:, 3] + boxes[:, 1]) * 0.5
-
-    w_half *= scale
-    h_half *= scale
-
-    scaled_boxes = torch.zeros_like(boxes)
-    scaled_boxes[:, 0] = x_c - w_half
-    scaled_boxes[:, 2] = x_c + w_half
-    scaled_boxes[:, 1] = y_c - h_half
-    scaled_boxes[:, 3] = y_c + h_half
-    return scaled_boxes
-
-
-@torch.jit.script_if_tracing
-def _paste_masks_tensor_shape(
-    masks: torch.Tensor,
-    boxes: torch.Tensor,
-    image_shape: Tuple[torch.Tensor, torch.Tensor],
-    threshold: float = 0.5,
-):
-    """
-    A wrapper of paste_masks_in_image where image_shape is Tensor.
-    During tracing, shapes might be tensors instead of ints. The Tensor->int
-    conversion should be scripted rather than traced.
- """ - return paste_masks_in_image(masks, boxes, (int(image_shape[0]), int(image_shape[1])), threshold) diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/resnet.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/resnet.py deleted file mode 100644 index 5b8e842c585a81b5345ade4ca1da62a4904a122a..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/resnet.py +++ /dev/null @@ -1,694 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -import fvcore.nn.weight_init as weight_init -import torch -import torch.nn.functional as F -from torch import nn - -from detectron2.layers import ( - CNNBlockBase, - Conv2d, - DeformConv, - ModulatedDeformConv, - ShapeSpec, - get_norm, -) - -from .backbone import Backbone -from .build import BACKBONE_REGISTRY - -__all__ = [ - "ResNetBlockBase", - "BasicBlock", - "BottleneckBlock", - "DeformBottleneckBlock", - "BasicStem", - "ResNet", - "make_stage", - "build_resnet_backbone", -] - - -class BasicBlock(CNNBlockBase): - """ - The basic residual block for ResNet-18 and ResNet-34 defined in :paper:`ResNet`, - with two 3x3 conv layers and a projection shortcut if needed. - """ - - def __init__(self, in_channels, out_channels, *, stride=1, norm="BN"): - """ - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - stride (int): Stride for the first conv. - norm (str or callable): normalization for all conv layers. - See :func:`layers.get_norm` for supported format. - """ - super().__init__(in_channels, out_channels, stride) - - if in_channels != out_channels: - self.shortcut = Conv2d( - in_channels, - out_channels, - kernel_size=1, - stride=stride, - bias=False, - norm=get_norm(norm, out_channels), - ) - else: - self.shortcut = None - - self.conv1 = Conv2d( - in_channels, - out_channels, - kernel_size=3, - stride=stride, - padding=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - - self.conv2 = Conv2d( - out_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - - for layer in [self.conv1, self.conv2, self.shortcut]: - if layer is not None: # shortcut can be None - weight_init.c2_msra_fill(layer) - - def forward(self, x): - out = self.conv1(x) - out = F.relu_(out) - out = self.conv2(out) - - if self.shortcut is not None: - shortcut = self.shortcut(x) - else: - shortcut = x - - out += shortcut - out = F.relu_(out) - return out - - -class BottleneckBlock(CNNBlockBase): - """ - The standard bottleneck residual block used by ResNet-50, 101 and 152 - defined in :paper:`ResNet`. It contains 3 conv layers with kernels - 1x1, 3x3, 1x1, and a projection shortcut if needed. - """ - - def __init__( - self, - in_channels, - out_channels, - *, - bottleneck_channels, - stride=1, - num_groups=1, - norm="BN", - stride_in_1x1=False, - dilation=1, - ): - """ - Args: - bottleneck_channels (int): number of output channels for the 3x3 - "bottleneck" conv layers. - num_groups (int): number of groups for the 3x3 conv layer. - norm (str or callable): normalization for all conv layers. - See :func:`layers.get_norm` for supported format. - stride_in_1x1 (bool): when stride>1, whether to put stride in the - first 1x1 convolution or the bottleneck 3x3 convolution. - dilation (int): the dilation rate of the 3x3 conv layer. 
- """ - super().__init__(in_channels, out_channels, stride) - - if in_channels != out_channels: - self.shortcut = Conv2d( - in_channels, - out_channels, - kernel_size=1, - stride=stride, - bias=False, - norm=get_norm(norm, out_channels), - ) - else: - self.shortcut = None - - # The original MSRA ResNet models have stride in the first 1x1 conv - # The subsequent fb.torch.resnet and Caffe2 ResNe[X]t implementations have - # stride in the 3x3 conv - stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride) - - self.conv1 = Conv2d( - in_channels, - bottleneck_channels, - kernel_size=1, - stride=stride_1x1, - bias=False, - norm=get_norm(norm, bottleneck_channels), - ) - - self.conv2 = Conv2d( - bottleneck_channels, - bottleneck_channels, - kernel_size=3, - stride=stride_3x3, - padding=1 * dilation, - bias=False, - groups=num_groups, - dilation=dilation, - norm=get_norm(norm, bottleneck_channels), - ) - - self.conv3 = Conv2d( - bottleneck_channels, - out_channels, - kernel_size=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - - for layer in [self.conv1, self.conv2, self.conv3, self.shortcut]: - if layer is not None: # shortcut can be None - weight_init.c2_msra_fill(layer) - - # Zero-initialize the last normalization in each residual branch, - # so that at the beginning, the residual branch starts with zeros, - # and each residual block behaves like an identity. - # See Sec 5.1 in "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour": - # "For BN layers, the learnable scaling coefficient γ is initialized - # to be 1, except for each residual block's last BN - # where γ is initialized to be 0." - - # nn.init.constant_(self.conv3.norm.weight, 0) - # TODO this somehow hurts performance when training GN models from scratch. - # Add it as an option when we need to use this code to train a backbone. - - def forward(self, x): - out = self.conv1(x) - out = F.relu_(out) - - out = self.conv2(out) - out = F.relu_(out) - - out = self.conv3(out) - - if self.shortcut is not None: - shortcut = self.shortcut(x) - else: - shortcut = x - - out += shortcut - out = F.relu_(out) - return out - - -class DeformBottleneckBlock(CNNBlockBase): - """ - Similar to :class:`BottleneckBlock`, but with :paper:`deformable conv ` - in the 3x3 convolution. 
- """ - - def __init__( - self, - in_channels, - out_channels, - *, - bottleneck_channels, - stride=1, - num_groups=1, - norm="BN", - stride_in_1x1=False, - dilation=1, - deform_modulated=False, - deform_num_groups=1, - ): - super().__init__(in_channels, out_channels, stride) - self.deform_modulated = deform_modulated - - if in_channels != out_channels: - self.shortcut = Conv2d( - in_channels, - out_channels, - kernel_size=1, - stride=stride, - bias=False, - norm=get_norm(norm, out_channels), - ) - else: - self.shortcut = None - - stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride) - - self.conv1 = Conv2d( - in_channels, - bottleneck_channels, - kernel_size=1, - stride=stride_1x1, - bias=False, - norm=get_norm(norm, bottleneck_channels), - ) - - if deform_modulated: - deform_conv_op = ModulatedDeformConv - # offset channels are 2 or 3 (if with modulated) * kernel_size * kernel_size - offset_channels = 27 - else: - deform_conv_op = DeformConv - offset_channels = 18 - - self.conv2_offset = Conv2d( - bottleneck_channels, - offset_channels * deform_num_groups, - kernel_size=3, - stride=stride_3x3, - padding=1 * dilation, - dilation=dilation, - ) - self.conv2 = deform_conv_op( - bottleneck_channels, - bottleneck_channels, - kernel_size=3, - stride=stride_3x3, - padding=1 * dilation, - bias=False, - groups=num_groups, - dilation=dilation, - deformable_groups=deform_num_groups, - norm=get_norm(norm, bottleneck_channels), - ) - - self.conv3 = Conv2d( - bottleneck_channels, - out_channels, - kernel_size=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - - for layer in [self.conv1, self.conv2, self.conv3, self.shortcut]: - if layer is not None: # shortcut can be None - weight_init.c2_msra_fill(layer) - - nn.init.constant_(self.conv2_offset.weight, 0) - nn.init.constant_(self.conv2_offset.bias, 0) - - def forward(self, x): - out = self.conv1(x) - out = F.relu_(out) - - if self.deform_modulated: - offset_mask = self.conv2_offset(out) - offset_x, offset_y, mask = torch.chunk(offset_mask, 3, dim=1) - offset = torch.cat((offset_x, offset_y), dim=1) - mask = mask.sigmoid() - out = self.conv2(out, offset, mask) - else: - offset = self.conv2_offset(out) - out = self.conv2(out, offset) - out = F.relu_(out) - - out = self.conv3(out) - - if self.shortcut is not None: - shortcut = self.shortcut(x) - else: - shortcut = x - - out += shortcut - out = F.relu_(out) - return out - - -class BasicStem(CNNBlockBase): - """ - The standard ResNet stem (layers before the first residual block), - with a conv, relu and max_pool. - """ - - def __init__(self, in_channels=3, out_channels=64, norm="BN"): - """ - Args: - norm (str or callable): norm after the first conv layer. - See :func:`layers.get_norm` for supported format. - """ - super().__init__(in_channels, out_channels, 4) - self.in_channels = in_channels - self.conv1 = Conv2d( - in_channels, - out_channels, - kernel_size=7, - stride=2, - padding=3, - bias=False, - norm=get_norm(norm, out_channels), - ) - weight_init.c2_msra_fill(self.conv1) - - def forward(self, x): - x = self.conv1(x) - x = F.relu_(x) - x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1) - return x - - -class ResNet(Backbone): - """ - Implement :paper:`ResNet`. - """ - - def __init__(self, stem, stages, num_classes=None, out_features=None, freeze_at=0): - """ - Args: - stem (nn.Module): a stem module - stages (list[list[CNNBlockBase]]): several (typically 4) stages, - each contains multiple :class:`CNNBlockBase`. 
- num_classes (None or int): if None, will not perform classification. - Otherwise, will create a linear layer. - out_features (list[str]): name of the layers whose outputs should - be returned in forward. Can be anything in "stem", "linear", or "res2" ... - If None, will return the output of the last layer. - freeze_at (int): The number of stages at the beginning to freeze. - see :meth:`freeze` for detailed explanation. - """ - super().__init__() - self.stem = stem - self.num_classes = num_classes - - current_stride = self.stem.stride - self._out_feature_strides = {"stem": current_stride} - self._out_feature_channels = {"stem": self.stem.out_channels} - - self.stage_names, self.stages = [], [] - - if out_features is not None: - # Avoid keeping unused layers in this module. They consume extra memory - # and may cause allreduce to fail - num_stages = max( - [{"res2": 1, "res3": 2, "res4": 3, "res5": 4}.get(f, 0) for f in out_features] - ) - stages = stages[:num_stages] - for i, blocks in enumerate(stages): - assert len(blocks) > 0, len(blocks) - for block in blocks: - assert isinstance(block, CNNBlockBase), block - - name = "res" + str(i + 2) - stage = nn.Sequential(*blocks) - - self.add_module(name, stage) - self.stage_names.append(name) - self.stages.append(stage) - - self._out_feature_strides[name] = current_stride = int( - current_stride * np.prod([k.stride for k in blocks]) - ) - self._out_feature_channels[name] = curr_channels = blocks[-1].out_channels - self.stage_names = tuple(self.stage_names) # Make it static for scripting - - if num_classes is not None: - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - self.linear = nn.Linear(curr_channels, num_classes) - - # Sec 5.1 in "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour": - # "The 1000-way fully-connected layer is initialized by - # drawing weights from a zero-mean Gaussian with standard deviation of 0.01." - nn.init.normal_(self.linear.weight, std=0.01) - name = "linear" - - if out_features is None: - out_features = [name] - self._out_features = out_features - assert len(self._out_features) - children = [x[0] for x in self.named_children()] - for out_feature in self._out_features: - assert out_feature in children, "Available children: {}".format(", ".join(children)) - self.freeze(freeze_at) - - def forward(self, x): - """ - Args: - x: Tensor of shape (N,C,H,W). H, W must be a multiple of ``self.size_divisibility``. - - Returns: - dict[str->Tensor]: names and the corresponding features - """ - assert x.dim() == 4, f"ResNet takes an input of shape (N, C, H, W). Got {x.shape} instead!" - outputs = {} - x = self.stem(x) - if "stem" in self._out_features: - outputs["stem"] = x - for name, stage in zip(self.stage_names, self.stages): - x = stage(x) - if name in self._out_features: - outputs[name] = x - if self.num_classes is not None: - x = self.avgpool(x) - x = torch.flatten(x, 1) - x = self.linear(x) - if "linear" in self._out_features: - outputs["linear"] = x - return outputs - - def output_shape(self): - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self._out_features - } - - def freeze(self, freeze_at=0): - """ - Freeze the first several stages of the ResNet. Commonly used in - fine-tuning. - - Layers that produce the same feature map spatial size are defined as one - "stage" by :paper:`FPN`. - - Args: - freeze_at (int): number of stages to freeze. - `1` means freezing the stem. `2` means freezing the stem and - one residual stage, etc. 
-
-        Returns:
-            nn.Module: this ResNet itself
-        """
-        if freeze_at >= 1:
-            self.stem.freeze()
-        for idx, stage in enumerate(self.stages, start=2):
-            if freeze_at >= idx:
-                for block in stage.children():
-                    block.freeze()
-        return self
-
-    @staticmethod
-    def make_stage(block_class, num_blocks, *, in_channels, out_channels, **kwargs):
-        """
-        Create a list of blocks of the same type that together form one ResNet stage.
-
-        Args:
-            block_class (type): a subclass of CNNBlockBase that's used to create all blocks in this
-                stage. A module of this type must not change spatial resolution of inputs unless its
-                stride != 1.
-            num_blocks (int): number of blocks in this stage
-            in_channels (int): input channels of the entire stage.
-            out_channels (int): output channels of **every block** in the stage.
-            kwargs: other arguments passed to the constructor of
-                `block_class`. If the argument name is "xx_per_block", the
-                argument is a list of values to be passed to each block in the
-                stage. Otherwise, the same argument is passed to every block
-                in the stage.
-
-        Returns:
-            list[CNNBlockBase]: a list of block modules.
-
-        Examples:
-        ::
-            stage = ResNet.make_stage(
-                BottleneckBlock, 3, in_channels=16, out_channels=64,
-                bottleneck_channels=16, num_groups=1,
-                stride_per_block=[2, 1, 1],
-                dilations_per_block=[1, 1, 2]
-            )
-
-        Usually, layers that produce the same feature map spatial size are defined as one
-        "stage" (in :paper:`FPN`). Under such definition, ``stride_per_block[1:]`` should
-        all be 1.
-        """
-        blocks = []
-        for i in range(num_blocks):
-            curr_kwargs = {}
-            for k, v in kwargs.items():
-                if k.endswith("_per_block"):
-                    assert len(v) == num_blocks, (
-                        f"Argument '{k}' of make_stage should have the "
-                        f"same length as num_blocks={num_blocks}."
-                    )
-                    newk = k[: -len("_per_block")]
-                    assert newk not in kwargs, f"Cannot call make_stage with both {k} and {newk}!"
-                    curr_kwargs[newk] = v[i]
-                else:
-                    curr_kwargs[k] = v
-
-            blocks.append(
-                block_class(in_channels=in_channels, out_channels=out_channels, **curr_kwargs)
-            )
-            in_channels = out_channels
-        return blocks
-
-    @staticmethod
-    def make_default_stages(depth, block_class=None, **kwargs):
-        """
-        Create a list of ResNet stages from a pre-defined depth (one of 18, 34, 50, 101, 152).
-        If it doesn't create the ResNet variant you need, please use :meth:`make_stage`
-        instead for fine-grained customization.
-
-        Args:
-            depth (int): depth of ResNet
-            block_class (type): the CNN block class. Has to accept
-                `bottleneck_channels` argument for depth > 50.
-                By default it is BasicBlock or BottleneckBlock, based on the
-                depth.
-            kwargs:
-                other arguments to pass to `make_stage`. Should not contain
-                stride and channels, as they are predefined for each depth.
-
-        Returns:
-            list[list[CNNBlockBase]]: modules in all stages; see arguments of
-                :class:`ResNet.__init__`.
- """ - num_blocks_per_stage = { - 18: [2, 2, 2, 2], - 34: [3, 4, 6, 3], - 50: [3, 4, 6, 3], - 101: [3, 4, 23, 3], - 152: [3, 8, 36, 3], - }[depth] - if block_class is None: - block_class = BasicBlock if depth < 50 else BottleneckBlock - if depth < 50: - in_channels = [64, 64, 128, 256] - out_channels = [64, 128, 256, 512] - else: - in_channels = [64, 256, 512, 1024] - out_channels = [256, 512, 1024, 2048] - ret = [] - for (n, s, i, o) in zip(num_blocks_per_stage, [1, 2, 2, 2], in_channels, out_channels): - if depth >= 50: - kwargs["bottleneck_channels"] = o // 4 - ret.append( - ResNet.make_stage( - block_class=block_class, - num_blocks=n, - stride_per_block=[s] + [1] * (n - 1), - in_channels=i, - out_channels=o, - **kwargs, - ) - ) - return ret - - -ResNetBlockBase = CNNBlockBase -""" -Alias for backward compatibiltiy. -""" - - -def make_stage(*args, **kwargs): - """ - Deprecated alias for backward compatibiltiy. - """ - return ResNet.make_stage(*args, **kwargs) - - -@BACKBONE_REGISTRY.register() -def build_resnet_backbone(cfg, input_shape): - """ - Create a ResNet instance from config. - - Returns: - ResNet: a :class:`ResNet` instance. - """ - # need registration of new blocks/stems? - norm = cfg.MODEL.RESNETS.NORM - stem = BasicStem( - in_channels=input_shape.channels, - out_channels=cfg.MODEL.RESNETS.STEM_OUT_CHANNELS, - norm=norm, - ) - - # fmt: off - freeze_at = cfg.MODEL.BACKBONE.FREEZE_AT - out_features = cfg.MODEL.RESNETS.OUT_FEATURES - depth = cfg.MODEL.RESNETS.DEPTH - num_groups = cfg.MODEL.RESNETS.NUM_GROUPS - width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP - bottleneck_channels = num_groups * width_per_group - in_channels = cfg.MODEL.RESNETS.STEM_OUT_CHANNELS - out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS - stride_in_1x1 = cfg.MODEL.RESNETS.STRIDE_IN_1X1 - res5_dilation = cfg.MODEL.RESNETS.RES5_DILATION - deform_on_per_stage = cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE - deform_modulated = cfg.MODEL.RESNETS.DEFORM_MODULATED - deform_num_groups = cfg.MODEL.RESNETS.DEFORM_NUM_GROUPS - # fmt: on - assert res5_dilation in {1, 2}, "res5_dilation cannot be {}.".format(res5_dilation) - - num_blocks_per_stage = { - 18: [2, 2, 2, 2], - 34: [3, 4, 6, 3], - 50: [3, 4, 6, 3], - 101: [3, 4, 23, 3], - 152: [3, 8, 36, 3], - }[depth] - - if depth in [18, 34]: - assert out_channels == 64, "Must set MODEL.RESNETS.RES2_OUT_CHANNELS = 64 for R18/R34" - assert not any( - deform_on_per_stage - ), "MODEL.RESNETS.DEFORM_ON_PER_STAGE unsupported for R18/R34" - assert res5_dilation == 1, "Must set MODEL.RESNETS.RES5_DILATION = 1 for R18/R34" - assert num_groups == 1, "Must set MODEL.RESNETS.NUM_GROUPS = 1 for R18/R34" - - stages = [] - - for idx, stage_idx in enumerate(range(2, 6)): - # res5_dilation is used this way as a convention in R-FCN & Deformable Conv paper - dilation = res5_dilation if stage_idx == 5 else 1 - first_stride = 1 if idx == 0 or (stage_idx == 5 and dilation == 2) else 2 - stage_kargs = { - "num_blocks": num_blocks_per_stage[idx], - "stride_per_block": [first_stride] + [1] * (num_blocks_per_stage[idx] - 1), - "in_channels": in_channels, - "out_channels": out_channels, - "norm": norm, - } - # Use BasicBlock for R18 and R34. 
-        if depth in [18, 34]:
-            stage_kargs["block_class"] = BasicBlock
-        else:
-            stage_kargs["bottleneck_channels"] = bottleneck_channels
-            stage_kargs["stride_in_1x1"] = stride_in_1x1
-            stage_kargs["dilation"] = dilation
-            stage_kargs["num_groups"] = num_groups
-            if deform_on_per_stage[idx]:
-                stage_kargs["block_class"] = DeformBottleneckBlock
-                stage_kargs["deform_modulated"] = deform_modulated
-                stage_kargs["deform_num_groups"] = deform_num_groups
-            else:
-                stage_kargs["block_class"] = BottleneckBlock
-        blocks = ResNet.make_stage(**stage_kargs)
-        in_channels = out_channels
-        out_channels *= 2
-        bottleneck_channels *= 2
-        stages.append(blocks)
-    return ResNet(stem, stages, out_features=out_features, freeze_at=freeze_at)
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/build.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/build.py
deleted file mode 100644
index 3427215746c9a146bd902f22ea9b26d121c36b27..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/build.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import torch
-
-from detectron2.utils.logger import _log_api_usage
-from detectron2.utils.registry import Registry
-
-META_ARCH_REGISTRY = Registry("META_ARCH")  # noqa F401 isort:skip
-META_ARCH_REGISTRY.__doc__ = """
-Registry for meta-architectures, i.e. the whole model.
-
-The registered object will be called with `obj(cfg)`
-and expected to return a `nn.Module` object.
-"""
-
-
-def build_model(cfg):
-    """
-    Build the whole model architecture, defined by ``cfg.MODEL.META_ARCHITECTURE``.
-    Note that it does not load any weights from ``cfg``.
-    """
-    meta_arch = cfg.MODEL.META_ARCHITECTURE
-    model = META_ARCH_REGISTRY.get(meta_arch)(cfg)
-    model.to(torch.device(cfg.MODEL.DEVICE))
-    _log_api_usage("modeling.meta_arch." + meta_arch)
-    return model
diff --git a/spaces/TuringAgency/anic_gui/Hintergrund/So funktioniert Anic.md b/spaces/TuringAgency/anic_gui/Hintergrund/So funktioniert Anic.md
deleted file mode 100644
index e8eb2bbbab62b63746ab4fda7e99c330c502cdb0..0000000000000000000000000000000000000000
--- a/spaces/TuringAgency/anic_gui/Hintergrund/So funktioniert Anic.md
+++ /dev/null
@@ -1,68 +0,0 @@
-# Column by an artificial intelligence: How does a robot author write?
-
-The taz is publishing the first column by a non-human author. How does that work? The most important questions about the artificial intelligence.
-(This text is also available on the taz website: https://taz.de/Kolumne-einer-kuenstlichen-Intelligenz/!5898282/)
-
-With Anic T. Wae, the taz has its first columnist who is not a human being but a so-called artificial intelligence. Here we explain in more detail how this works and how the taz deals with its robot columnist.
-
-## Who is Anic T. Wae?
-
-Anic T. Wae is the name we gave to the fictional persona who writes the monthly taz column Intelligenzbestie. Anic generates the texts with a machine-learning system.
-
-Anic (none/they) is, as far as we know, the first columnist at a German-language newspaper who is not a human. Anic describes themself as an "oversized, bright green box with a single, giant eye in the middle". The AI columnist can be reached by e-mail at anic@taz.de.
-
-## How does it work?
-
-The texts are generated with so-called transformers. These are computer models that can learn to write from large amounts of text. The best-known one is called GPT-3 (short for Generative Pre-trained Transformer 3) and can be tried out here. The bot ChatGPT, for example, is also based on it. When choosing the right model for our column texts, we weigh various factors such as size, cost, quality of the German output, and energy consumption.
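To make the mechanics concrete, here is a minimal sketch of what such prompt-driven generation looks like in code, using the Hugging Face transformers library. The model name, prompt, and sampling parameters are illustrative placeholders, not the actual setup behind Anic:

```python
# Minimal text-generation sketch (illustrative only; Anic's real pipeline,
# model, and parameters differ and have changed over time).
from transformers import pipeline

# Placeholder model; in practice a German-capable model would be chosen
# based on size, cost, output quality, and energy use, as described above.
generator = pipeline("text-generation", model="gpt2")

prompt = "Stelle dich vor."  # a topic suggestion, see the next section
result = generator(
    prompt,
    max_new_tokens=200,  # roughly the opening of a short column
    do_sample=True,      # sample rather than always picking the likeliest word
    temperature=0.9,     # higher values produce more surprising text
)
print(result[0]["generated_text"])
```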
-Since we want Anic to develop as a columnist over time, we will also keep changing the system that generates the texts. We continuously publish the current programs and processes behind the column texts on this page.
-
-There you can also try out Anic's current version yourself.
-
-## Who chooses what Anic writes about?
-
-Anic needs a starting point to begin writing, a topic suggestion of sorts, and that comes from humans. We write a so-called prompt that gives the AI system a direction in which the generated text should unfold. As with human columnists, this input can contain a topic suggestion, e.g. "Introduce yourself." or "Write a funny text about Christmas." But every now and then we also want to leave it to Anic to pick a topic; in that case the prompt is more general. You can see here exactly what the prompts behind the published texts looked like.
-
-Sometimes Anic writes only the beginning of a text. We then encourage Anic to continue it by feeding everything written so far back in at the top as the prompt.
-
-## Are the texts changed by the editorial team?
-
-With ordinary column texts, the editorial team usually makes small changes. For example, texts are shortened, spelling mistakes are corrected, or phrasings are improved. We do not want to distort Anic's texts, so we publish them just as they arrive. For the sake of readability, the taz fixes only errors such as double spaces, and even then the line between a typo and a stylistic quirk is often blurry with Anic. We also publish the texts unchanged in our Hugging Face repo, so they can be compared at any time.
-
-## Do you print everything that comes out, even if it contains, say, sexist or racist language?
-
-Not every text makes it into the paper as-is. The curatorial team behind Anic makes a preselection. We pick the best texts based on qualities such as entertainment value, readability, imagination, humor, depth, surprising coherence, or surprising absurdity. Sometimes we have to push Anic's buttons many times before a text of the right length and quality comes out.
-
-The technology behind Anic can also produce texts that contain factual errors or even offensive or otherwise harmful language. Small, recognizable inaccuracies can be interesting, but we do not publish texts that could harm people or groups.
-
-Just as with human authors, we seek dialogue with the machine when there are disagreements: we can readjust the parameters or give written feedback. Letters from readers are welcome too.
-
-## Does a robot columnist work for free?
-
-Different companies currently charge different amounts for generating texts with their models, which is why we are trying out several of them. At OpenAI, the company behind GPT-3, a text of 3,000 characters costs between 0.0016 and 0.48 USD, depending on how good and fast the model used is.
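As a back-of-the-envelope illustration of how such per-text prices arise (the characters-per-token ratio and the per-1,000-token prices below are rough assumptions for illustration, not any provider's actual tariff):

```python
# Rough cost estimate for one generated column; all numbers are assumptions.
CHARS_PER_TOKEN = 3   # assumed average for German text
COLUMN_CHARS = 3_000  # the column length cited above

tokens = COLUMN_CHARS / CHARS_PER_TOKEN  # ~1,000 tokens

# Assumed price range per 1,000 tokens, from a small to a large model.
for price_per_1k_usd in (0.0016, 0.02):
    cost = tokens / 1_000 * price_per_1k_usd
    print(f"{tokens:.0f} tokens at ${price_per_1k_usd}/1K tokens -> ${cost:.4f}")

# Regenerating a text several times before one is usable multiplies
# the effective cost accordingly.
```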
-
-Anic also needs energy. Most of the energy is consumed when training a large language model; the training corresponds, so to speak, to the manufacturing cost. As with a car, you spend energy once to build the thing and then smaller amounts during day-to-day operation.
-
-According to this paper, training GPT-NeoX-20B, a well-known open-source model, for example, consumed about 66 MWh of energy and caused almost 35 tonnes of CO2 emissions. Producing a single column text with the trained model, on the other hand, is by our calculations comparable to a short drive in a middle-aged petrol car.
-
-Please bear in mind that human scribblers also have to train for a long time before they write columns, and that they cause emissions too in the process.
-
-We publish more detailed information on the technical background of the current column text here: https://huggingface.co/spaces/TuringAgency/anic_gui/discussions/1
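A small worked example of how the training footprint quoted above amortizes over individual texts; the lifetime text count and per-generation energy are invented for illustration:

```python
# Amortizing training energy over generated texts (illustrative numbers).
TRAINING_KWH = 66_000     # 66 MWh for GPT-NeoX-20B, per the cited paper
TRAINING_CO2_KG = 35_000  # ~35 tonnes of CO2, per the cited paper

texts_over_lifetime = 100_000  # assumed total output of the trained model
per_text_kwh = TRAINING_KWH / texts_over_lifetime              # 0.66 kWh per text
per_text_co2_g = TRAINING_CO2_KG / texts_over_lifetime * 1000  # 350 g per text

inference_kwh = 0.005  # assumed ~5 Wh of GPU time per column-length generation

print(f"training share per text: {per_text_kwh:.2f} kWh, {per_text_co2_g:.0f} g CO2")
print(f"inference per text:      {inference_kwh * 1000:.0f} Wh (assumed)")
```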
-
-## Who gets paid for this?
-
-Since there have been no robot columnists before, there is as yet no union and there are no standard contracts. We decided that Anic receives the same fee as the human columnists who write in rotation with Anic. With that money we buy the computing power, and whatever is left over we donate to Atmosfair to offset CO2 emissions.
-
-## Does Anic have consciousness?
-
-Most AI researchers are quite sure that large language models do not have consciousness and cannot develop it. They are said to be "pure statistics". We believe the answer is not quite that simple, at least not as long as we do not know exactly what consciousness in the human brain actually is, and whether it too arises from pure statistics.
-
-That said, Anic does not necessarily need consciousness to write interesting texts. Do interesting texts arise in the consciousness of the writer or of the reader? Or somewhere in between?
-
-We humans cannot not interpret, so perhaps it hardly matters whether Anic was thinking anything while writing. Something communicates with us like a human, and some people will therefore ascribe consciousness to it.
-
-## Who developed Anic?
-
-The team behind the project consists of: Marie Kilg, Philipp Meier, Robert Salzer, Theresa Körner, Lukas Graw, Nicholas Utikal, Roland Fischer, Luise Schneider.
-
-We are a loose group of people interested in AI. Since 2020, many of us have been working in the Turing Agency on projects and events that aim to make so-called artificial intelligence more accessible to society. (https://www.turingagency.org/)
Hc(e){var t=e.alternate;return e===B||t!==null&&t===B}function qc(e,t){zn=si=!0;var n=e.pending;n===null?t.next=t:(t.next=n.next,n.next=t),e.pending=t}function Qc(e,t,n){if((n&4194240)!==0){var r=t.lanes;r&=e.pendingLanes,n|=r,t.lanes=n,Do(e,n)}}var ui={readContext:Re,useCallback:ie,useContext:ie,useEffect:ie,useImperativeHandle:ie,useInsertionEffect:ie,useLayoutEffect:ie,useMemo:ie,useReducer:ie,useRef:ie,useState:ie,useDebugValue:ie,useDeferredValue:ie,useTransition:ie,useMutableSource:ie,useSyncExternalStore:ie,useId:ie,unstable_isNewReconciler:!1},Ap={readContext:Re,useCallback:function(e,t){return De().memoizedState=[e,t===void 0?null:t],e},useContext:Re,useEffect:Su,useImperativeHandle:function(e,t,n){return n=n!=null?n.concat([e]):null,Mr(4194308,4,Mc.bind(null,t,e),n)},useLayoutEffect:function(e,t){return Mr(4194308,4,e,t)},useInsertionEffect:function(e,t){return Mr(4,2,e,t)},useMemo:function(e,t){var n=De();return t=t===void 0?null:t,e=e(),n.memoizedState=[e,t],e},useReducer:function(e,t,n){var r=De();return t=n!==void 0?n(t):t,r.memoizedState=r.baseState=t,e={pending:null,interleaved:null,lanes:0,dispatch:null,lastRenderedReducer:e,lastRenderedState:t},r.queue=e,e=e.dispatch=Cp.bind(null,B,e),[r.memoizedState,e]},useRef:function(e){var t=De();return e={current:e},t.memoizedState=e},useState:gu,useDebugValue:as,useDeferredValue:function(e){return De().memoizedState=e},useTransition:function(){var e=gu(!1),t=e[0];return e=_p.bind(null,e[1]),De().memoizedState=e,[t,e]},useMutableSource:function(){},useSyncExternalStore:function(e,t,n){var r=B,i=De();if(V){if(n===void 0)throw Error(w(407));n=n()}else{if(n=t(),b===null)throw Error(w(349));(Mt&30)!==0||Tc(r,t,n)}i.memoizedState=n;var l={value:n,getSnapshot:t};return i.queue=l,Su(jc.bind(null,r,l,e),[e]),r.flags|=2048,rr(9,Rc.bind(null,r,l,n,t),void 0,null),n},useId:function(){var e=De(),t=b.identifierPrefix;if(V){var n=Xe,r=Ge;n=(r&~(1<<32-Ue(r)-1)).toString(32)+n,t=":"+t+"R"+n,n=tr++,0<\/script>",e=e.removeChild(e.firstChild)):typeof r.is=="string"?e=o.createElement(n,{is:r.is}):(e=o.createElement(n),n==="select"&&(o=e,r.multiple?o.multiple=!0:r.size&&(o.size=r.size))):e=o.createElementNS(e,n),e[He]=t,e[Zn]=r,ed(e,t,!1,!1),t.stateNode=e;e:{switch(o=$l(n,r),n){case"dialog":I("cancel",e),I("close",e),i=r;break;case"iframe":case"object":case"embed":I("load",e),i=r;break;case"video":case"audio":for(i=0;iyn&&(t.flags|=128,r=!0,xn(l,!1),t.lanes=4194304)}else{if(!r)if(e=oi(o),e!==null){if(t.flags|=128,r=!0,n=e.updateQueue,n!==null&&(t.updateQueue=n,t.flags|=4),xn(l,!0),l.tail===null&&l.tailMode==="hidden"&&!o.alternate&&!V)return le(t),null}else 2*W()-l.renderingStartTime>yn&&n!==1073741824&&(t.flags|=128,r=!0,xn(l,!1),t.lanes=4194304);l.isBackwards?(o.sibling=t.child,t.child=o):(n=l.last,n!==null?n.sibling=o:t.child=o,l.last=o)}return l.tail!==null?(t=l.tail,l.rendering=t,l.tail=t.sibling,l.renderingStartTime=W(),t.sibling=null,n=D.current,L(D,r?n&1|2:n&1),t):(le(t),null);case 22:case 23:return ms(),r=t.memoizedState!==null,e!==null&&e.memoizedState!==null!==r&&(t.flags|=8192),r&&(t.mode&1)!==0?(we&1073741824)!==0&&(le(t),t.subtreeFlags&6&&(t.flags|=8192)):le(t),null;case 24:return null;case 25:return null}throw Error(w(156,t.tag))}function Up(e,t){switch(Go(t),t.tag){case 1:return me(t.type)&&br(),e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 3:return mn(),U(he),U(se),is(),e=t.flags,(e&65536)!==0&&(e&128)===0?(t.flags=e&-65537|128,t):null;case 5:return rs(t),null;case 
13:if(U(D),e=t.memoizedState,e!==null&&e.dehydrated!==null){if(t.alternate===null)throw Error(w(340));pn()}return e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 19:return U(D),null;case 4:return mn(),null;case 10:return bo(t.type._context),null;case 22:case 23:return ms(),null;case 24:return null;default:return null}}var xr=!1,oe=!1,Mp=typeof WeakSet=="function"?WeakSet:Set,O=null;function nn(e,t){var n=e.ref;if(n!==null)if(typeof n=="function")try{n(null)}catch(r){q(e,t,r)}else n.current=null}function vo(e,t,n){try{n()}catch(r){q(e,t,r)}}var Au=!1;function zp(e,t){if(Zl=Gr,e=oc(),Ko(e)){if("selectionStart"in e)var n={start:e.selectionStart,end:e.selectionEnd};else e:{n=(n=e.ownerDocument)&&n.defaultView||window;var r=n.getSelection&&n.getSelection();if(r&&r.rangeCount!==0){n=r.anchorNode;var i=r.anchorOffset,l=r.focusNode;r=r.focusOffset;try{n.nodeType,l.nodeType}catch{n=null;break e}var o=0,s=-1,u=-1,a=0,d=0,c=e,f=null;t:for(;;){for(var m;c!==n||i!==0&&c.nodeType!==3||(s=o+i),c!==l||r!==0&&c.nodeType!==3||(u=o+r),c.nodeType===3&&(o+=c.nodeValue.length),(m=c.firstChild)!==null;)f=c,c=m;for(;;){if(c===e)break t;if(f===n&&++a===i&&(s=o),f===l&&++d===r&&(u=o),(m=c.nextSibling)!==null)break;c=f,f=c.parentNode}c=m}n=s===-1||u===-1?null:{start:s,end:u}}else n=null}n=n||{start:0,end:0}}else n=null;for(bl={focusedElem:e,selectionRange:n},Gr=!1,O=t;O!==null;)if(t=O,e=t.child,(t.subtreeFlags&1028)!==0&&e!==null)e.return=t,O=e;else for(;O!==null;){t=O;try{var y=t.alternate;if((t.flags&1024)!==0)switch(t.tag){case 0:case 11:case 15:break;case 1:if(y!==null){var g=y.memoizedProps,P=y.memoizedState,h=t.stateNode,p=h.getSnapshotBeforeUpdate(t.elementType===t.type?g:Ne(t.type,g),P);h.__reactInternalSnapshotBeforeUpdate=p}break;case 3:var v=t.stateNode.containerInfo;v.nodeType===1?v.textContent="":v.nodeType===9&&v.documentElement&&v.removeChild(v.documentElement);break;case 5:case 6:case 4:case 17:break;default:throw Error(w(163))}}catch(S){q(t,t.return,S)}if(e=t.sibling,e!==null){e.return=t.return,O=e;break}O=t.return}return y=Au,Au=!1,y}function Vn(e,t,n){var r=t.updateQueue;if(r=r!==null?r.lastEffect:null,r!==null){var i=r=r.next;do{if((i.tag&e)===e){var l=i.destroy;i.destroy=void 0,l!==void 0&&vo(t,n,l)}i=i.next}while(i!==r)}}function Oi(e,t){if(t=t.updateQueue,t=t!==null?t.lastEffect:null,t!==null){var n=t=t.next;do{if((n.tag&e)===e){var r=n.create;n.destroy=r()}n=n.next}while(n!==t)}}function yo(e){var t=e.ref;if(t!==null){var n=e.stateNode;switch(e.tag){case 5:e=n;break;default:e=n}typeof t=="function"?t(e):t.current=e}}function rd(e){var t=e.alternate;t!==null&&(e.alternate=null,rd(t)),e.child=null,e.deletions=null,e.sibling=null,e.tag===5&&(t=e.stateNode,t!==null&&(delete t[He],delete t[Zn],delete t[no],delete t[wp],delete t[Ep])),e.stateNode=null,e.return=null,e.dependencies=null,e.memoizedProps=null,e.memoizedState=null,e.pendingProps=null,e.stateNode=null,e.updateQueue=null}function id(e){return e.tag===5||e.tag===3||e.tag===4}function Tu(e){e:for(;;){for(;e.sibling===null;){if(e.return===null||id(e.return))return null;e=e.return}for(e.sibling.return=e.return,e=e.sibling;e.tag!==5&&e.tag!==6&&e.tag!==18;){if(e.flags&2||e.child===null||e.tag===4)continue e;e.child.return=e,e=e.child}if(!(e.flags&2))return e.stateNode}}function go(e,t,n){var 
r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.nodeType===8?n.parentNode.insertBefore(e,t):n.insertBefore(e,t):(n.nodeType===8?(t=n.parentNode,t.insertBefore(e,n)):(t=n,t.appendChild(e)),n=n._reactRootContainer,n!=null||t.onclick!==null||(t.onclick=Zr));else if(r!==4&&(e=e.child,e!==null))for(go(e,t,n),e=e.sibling;e!==null;)go(e,t,n),e=e.sibling}function So(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.insertBefore(e,t):n.appendChild(e);else if(r!==4&&(e=e.child,e!==null))for(So(e,t,n),e=e.sibling;e!==null;)So(e,t,n),e=e.sibling}var ee=null,Le=!1;function rt(e,t,n){for(n=n.child;n!==null;)ld(e,t,n),n=n.sibling}function ld(e,t,n){if(qe&&typeof qe.onCommitFiberUnmount=="function")try{qe.onCommitFiberUnmount(hi,n)}catch{}switch(n.tag){case 5:oe||nn(n,t);case 6:var r=ee,i=Le;ee=null,rt(e,t,n),ee=r,Le=i,ee!==null&&(Le?(e=ee,n=n.stateNode,e.nodeType===8?e.parentNode.removeChild(n):e.removeChild(n)):ee.removeChild(n.stateNode));break;case 18:ee!==null&&(Le?(e=ee,n=n.stateNode,e.nodeType===8?tl(e.parentNode,n):e.nodeType===1&&tl(e,n),Kn(e)):tl(ee,n.stateNode));break;case 4:r=ee,i=Le,ee=n.stateNode.containerInfo,Le=!0,rt(e,t,n),ee=r,Le=i;break;case 0:case 11:case 14:case 15:if(!oe&&(r=n.updateQueue,r!==null&&(r=r.lastEffect,r!==null))){i=r=r.next;do{var l=i,o=l.destroy;l=l.tag,o!==void 0&&((l&2)!==0||(l&4)!==0)&&vo(n,t,o),i=i.next}while(i!==r)}rt(e,t,n);break;case 1:if(!oe&&(nn(n,t),r=n.stateNode,typeof r.componentWillUnmount=="function"))try{r.props=n.memoizedProps,r.state=n.memoizedState,r.componentWillUnmount()}catch(s){q(n,t,s)}rt(e,t,n);break;case 21:rt(e,t,n);break;case 22:n.mode&1?(oe=(r=oe)||n.memoizedState!==null,rt(e,t,n),oe=r):rt(e,t,n);break;default:rt(e,t,n)}}function Ru(e){var t=e.updateQueue;if(t!==null){e.updateQueue=null;var n=e.stateNode;n===null&&(n=e.stateNode=new Mp),t.forEach(function(r){var i=Kp.bind(null,e,r);n.has(r)||(n.add(r),r.then(i,i))})}}function Fe(e,t){var n=t.deletions;if(n!==null)for(var r=0;ri&&(i=o),r&=~l}if(r=i,r=W()-r,r=(120>r?120:480>r?480:1080>r?1080:1920>r?1920:3e3>r?3e3:4320>r?4320:1960*Dp(r/1960))-r,10e?16:e,ct===null)var r=!1;else{if(e=ct,ct=null,di=0,(F&6)!==0)throw Error(w(331));var i=F;for(F|=4,O=e.current;O!==null;){var l=O,o=l.child;if((O.flags&16)!==0){var s=l.deletions;if(s!==null){for(var u=0;uW()-ps?Nt(e,0):fs|=n),ve(e,t)}function pd(e,t){t===0&&((e.mode&1)===0?t=1:(t=gr,gr<<=1,(gr&130023424)===0&&(gr=4194304)));var n=ae();e=et(e,t),e!==null&&(or(e,t,n),ve(e,n))}function Wp(e){var t=e.memoizedState,n=0;t!==null&&(n=t.retryLane),pd(e,n)}function Kp(e,t){var n=0;switch(e.tag){case 13:var r=e.stateNode,i=e.memoizedState;i!==null&&(n=i.retryLane);break;case 19:r=e.stateNode;break;default:throw Error(w(314))}r!==null&&r.delete(t),pd(e,n)}var hd;hd=function(e,t,n){if(e!==null)if(e.memoizedProps!==t.pendingProps||he.current)pe=!0;else{if((e.lanes&n)===0&&(t.flags&128)===0)return pe=!1,Lp(e,t,n);pe=(e.flags&131072)!==0}else pe=!1,V&&(t.flags&1048576)!==0&&yc(t,ni,t.index);switch(t.lanes=0,t.tag){case 2:var r=t.type;zr(e,t),e=t.pendingProps;var i=fn(t,se.current);an(t,n),i=os(null,t,r,e,i,n);var l=ss();return t.flags|=1,typeof i=="object"&&i!==null&&typeof i.render=="function"&&i.$$typeof===void 0?(t.tag=1,t.memoizedState=null,t.updateQueue=null,me(r)?(l=!0,ei(t)):l=!1,t.memoizedState=i.state!==null&&i.state!==void 0?i.state:null,ts(t),i.updater=wi,t.stateNode=i,i._reactInternals=t,uo(t,r,e,n),t=fo(null,t,r,!0,l,n)):(t.tag=0,V&&l&&Yo(t),ue(null,t,i,n),t=t.child),t;case 
16:r=t.elementType;e:{switch(zr(e,t),e=t.pendingProps,i=r._init,r=i(r._payload),t.type=r,i=t.tag=Gp(r),e=Ne(r,e),i){case 0:t=co(null,t,r,e,n);break e;case 1:t=_u(null,t,r,e,n);break e;case 11:t=ku(null,t,r,e,n);break e;case 14:t=Pu(null,t,r,Ne(r.type,e),n);break e}throw Error(w(306,r,""))}return t;case 0:return r=t.type,i=t.pendingProps,i=t.elementType===r?i:Ne(r,i),co(e,t,r,i,n);case 1:return r=t.type,i=t.pendingProps,i=t.elementType===r?i:Ne(r,i),_u(e,t,r,i,n);case 3:e:{if(Jc(t),e===null)throw Error(w(387));r=t.pendingProps,l=t.memoizedState,i=l.element,Ec(e,t),li(t,r,null,n);var o=t.memoizedState;if(r=o.element,l.isDehydrated)if(l={element:r,isDehydrated:!1,cache:o.cache,pendingSuspenseBoundaries:o.pendingSuspenseBoundaries,transitions:o.transitions},t.updateQueue.baseState=l,t.memoizedState=l,t.flags&256){i=vn(Error(w(423)),t),t=Cu(e,t,r,n,i);break e}else if(r!==i){i=vn(Error(w(424)),t),t=Cu(e,t,r,n,i);break e}else for(Ee=ht(t.stateNode.containerInfo.firstChild),Oe=t,V=!0,Ie=null,n=_c(t,null,r,n),t.child=n;n;)n.flags=n.flags&-3|4096,n=n.sibling;else{if(pn(),r===i){t=tt(e,t,n);break e}ue(e,t,r,n)}t=t.child}return t;case 5:return Cc(t),e===null&&lo(t),r=t.type,i=t.pendingProps,l=e!==null?e.memoizedProps:null,o=i.children,eo(r,i)?o=null:l!==null&&eo(r,l)&&(t.flags|=32),Xc(e,t),ue(e,t,o,n),t.child;case 6:return e===null&&lo(t),null;case 13:return Zc(e,t,n);case 4:return ns(t,t.stateNode.containerInfo),r=t.pendingProps,e===null?t.child=hn(t,null,r,n):ue(e,t,r,n),t.child;case 11:return r=t.type,i=t.pendingProps,i=t.elementType===r?i:Ne(r,i),ku(e,t,r,i,n);case 7:return ue(e,t,t.pendingProps,n),t.child;case 8:return ue(e,t,t.pendingProps.children,n),t.child;case 12:return ue(e,t,t.pendingProps.children,n),t.child;case 10:e:{if(r=t.type._context,i=t.pendingProps,l=t.memoizedProps,o=i.value,L(ri,r._currentValue),r._currentValue=o,l!==null)if(ze(l.value,o)){if(l.children===i.children&&!he.current){t=tt(e,t,n);break e}}else for(l=t.child,l!==null&&(l.return=t);l!==null;){var s=l.dependencies;if(s!==null){o=l.child;for(var u=s.firstContext;u!==null;){if(u.context===r){if(l.tag===1){u=Je(-1,n&-n),u.tag=2;var a=l.updateQueue;if(a!==null){a=a.shared;var d=a.pending;d===null?u.next=u:(u.next=d.next,d.next=u),a.pending=u}}l.lanes|=n,u=l.alternate,u!==null&&(u.lanes|=n),oo(l.return,n,t),s.lanes|=n;break}u=u.next}}else if(l.tag===10)o=l.type===t.type?null:l.child;else if(l.tag===18){if(o=l.return,o===null)throw Error(w(341));o.lanes|=n,s=o.alternate,s!==null&&(s.lanes|=n),oo(o,n,t),o=l.sibling}else o=l.child;if(o!==null)o.return=l;else for(o=l;o!==null;){if(o===t){o=null;break}if(l=o.sibling,l!==null){l.return=o.return,o=l;break}o=o.return}l=o}ue(e,t,i.children,n),t=t.child}return t;case 9:return i=t.type,r=t.pendingProps.children,an(t,n),i=Re(i),r=r(i),t.flags|=1,ue(e,t,r,n),t.child;case 14:return r=t.type,i=Ne(r,t.pendingProps),i=Ne(r.type,i),Pu(e,t,r,i,n);case 15:return Yc(e,t,t.type,t.pendingProps,n);case 17:return r=t.type,i=t.pendingProps,i=t.elementType===r?i:Ne(r,i),zr(e,t),t.tag=1,me(r)?(e=!0,ei(t)):e=!1,an(t,n),kc(t,r,i),uo(t,r,i,n),fo(null,t,r,!0,e,n);case 19:return bc(e,t,n);case 22:return Gc(e,t,n)}throw Error(w(156,t.tag))};function md(e,t){return Ba(e,t)}function 
Yp(e,t,n,r){this.tag=e,this.key=n,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=t,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=r,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function Ae(e,t,n,r){return new Yp(e,t,n,r)}function ys(e){return e=e.prototype,!(!e||!e.isReactComponent)}function Gp(e){if(typeof e=="function")return ys(e)?1:0;if(e!=null){if(e=e.$$typeof,e===Uo)return 11;if(e===Mo)return 14}return 2}function gt(e,t){var n=e.alternate;return n===null?(n=Ae(e.tag,t,e.key,e.mode),n.elementType=e.elementType,n.type=e.type,n.stateNode=e.stateNode,n.alternate=e,e.alternate=n):(n.pendingProps=t,n.type=e.type,n.flags=0,n.subtreeFlags=0,n.deletions=null),n.flags=e.flags&14680064,n.childLanes=e.childLanes,n.lanes=e.lanes,n.child=e.child,n.memoizedProps=e.memoizedProps,n.memoizedState=e.memoizedState,n.updateQueue=e.updateQueue,t=e.dependencies,n.dependencies=t===null?null:{lanes:t.lanes,firstContext:t.firstContext},n.sibling=e.sibling,n.index=e.index,n.ref=e.ref,n}function Br(e,t,n,r,i,l){var o=2;if(r=e,typeof e=="function")ys(e)&&(o=1);else if(typeof e=="string")o=5;else e:switch(e){case Kt:return Lt(n.children,i,l,t);case Io:o=8,i|=8;break;case Fl:return e=Ae(12,n,t,i|2),e.elementType=Fl,e.lanes=l,e;case Nl:return e=Ae(13,n,t,i),e.elementType=Nl,e.lanes=l,e;case Ll:return e=Ae(19,n,t,i),e.elementType=Ll,e.lanes=l,e;case Pa:return Pi(n,i,l,t);default:if(typeof e=="object"&&e!==null)switch(e.$$typeof){case Oa:o=10;break e;case ka:o=9;break e;case Uo:o=11;break e;case Mo:o=14;break e;case it:o=16,r=null;break e}throw Error(w(130,e==null?e:typeof e,""))}return t=Ae(o,n,t,i),t.elementType=e,t.type=r,t.lanes=l,t}function Lt(e,t,n,r){return e=Ae(7,e,r,t),e.lanes=n,e}function Pi(e,t,n,r){return e=Ae(22,e,r,t),e.elementType=Pa,e.lanes=n,e.stateNode={isHidden:!1},e}function al(e,t,n){return e=Ae(6,e,null,t),e.lanes=n,e}function cl(e,t,n){return t=Ae(4,e.children!==null?e.children:[],e.key,t),t.lanes=n,t.stateNode={containerInfo:e.containerInfo,pendingChildren:null,implementation:e.implementation},t}function Xp(e,t,n,r,i){this.tag=t,this.containerInfo=e,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=qi(0),this.expirationTimes=qi(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=qi(0),this.identifierPrefix=r,this.onRecoverableError=i,this.mutableSourceEagerHydrationData=null}function gs(e,t,n,r,i,l,o,s,u){return e=new Xp(e,t,n,s,u),t===1?(t=1,l===!0&&(t|=8)):t=0,l=Ae(3,null,null,t),e.current=l,l.stateNode=e,l.memoizedState={element:r,isDehydrated:n,cache:null,transitions:null,pendingSuspenseBoundaries:null},ts(l),e}function Jp(e,t,n){var r=3"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(t)}catch(n){console.error(n)}}t(),e.exports=Pe})(ya);var zu=ya.exports;Rl.createRoot=zu.createRoot,Rl.hydrateRoot=zu.hydrateRoot;var Po={},Sd={},Os={exports:{}},ks={exports:{}},wd=function(t,n){return function(){for(var i=new Array(arguments.length),l=0;l"u"}function rh(e){return e!==null&&!_o(e)&&e.constructor!==null&&!_o(e.constructor)&&typeof e.constructor.isBuffer=="function"&&e.constructor.isBuffer(e)}function Ed(e){return 
Pt.call(e)==="[object ArrayBuffer]"}function ih(e){return Pt.call(e)==="[object FormData]"}function lh(e){var t;return typeof ArrayBuffer<"u"&&ArrayBuffer.isView?t=ArrayBuffer.isView(e):t=e&&e.buffer&&Ed(e.buffer),t}function oh(e){return typeof e=="string"}function sh(e){return typeof e=="number"}function Od(e){return e!==null&&typeof e=="object"}function $r(e){if(Pt.call(e)!=="[object Object]")return!1;var t=Object.getPrototypeOf(e);return t===null||t===Object.prototype}function uh(e){return Pt.call(e)==="[object Date]"}function ah(e){return Pt.call(e)==="[object File]"}function ch(e){return Pt.call(e)==="[object Blob]"}function kd(e){return Pt.call(e)==="[object Function]"}function dh(e){return Od(e)&&kd(e.pipe)}function fh(e){return Pt.call(e)==="[object URLSearchParams]"}function ph(e){return e.trim?e.trim():e.replace(/^\s+|\s+$/g,"")}function hh(){return typeof navigator<"u"&&(navigator.product==="ReactNative"||navigator.product==="NativeScript"||navigator.product==="NS")?!1:typeof window<"u"&&typeof document<"u"}function _s(e,t){if(!(e===null||typeof e>"u"))if(typeof e!="object"&&(e=[e]),Ps(e))for(var n=0,r=e.length;n"u"||(qt.isArray(u)?a=a+"[]":u=[u],qt.forEach(u,function(c){qt.isDate(c)?c=c.toISOString():qt.isObject(c)&&(c=JSON.stringify(c)),l.push(Vu(a)+"="+Vu(c))}))}),i=l.join("&")}if(i){var o=t.indexOf("#");o!==-1&&(t=t.slice(0,o)),t+=(t.indexOf("?")===-1?"?":"&")+i}return t},yh=ye;function Ti(){this.handlers=[]}Ti.prototype.use=function(t,n,r){return this.handlers.push({fulfilled:t,rejected:n,synchronous:r?r.synchronous:!1,runWhen:r?r.runWhen:null}),this.handlers.length-1};Ti.prototype.eject=function(t){this.handlers[t]&&(this.handlers[t]=null)};Ti.prototype.forEach=function(t){yh.forEach(this.handlers,function(r){r!==null&&t(r)})};var gh=Ti,Sh=ye,wh=function(t,n){Sh.forEach(t,function(i,l){l!==n&&l.toUpperCase()===n.toUpperCase()&&(t[n]=i,delete t[l])})},_d=function(t,n,r,i,l){return t.config=n,r&&(t.code=r),t.request=i,t.response=l,t.isAxiosError=!0,t.toJSON=function(){return{message:this.message,name:this.name,description:this.description,number:this.number,fileName:this.fileName,lineNumber:this.lineNumber,columnNumber:this.columnNumber,stack:this.stack,config:this.config,code:this.code,status:this.response&&this.response.status?this.response.status:null}},t},Cd={silentJSONParsing:!0,forcedJSONParsing:!0,clarifyTimeoutError:!1},dl,Du;function xd(){if(Du)return dl;Du=1;var e=_d;return dl=function(n,r,i,l,o){var s=new Error(n);return e(s,r,i,l,o)},dl}var fl,Bu;function Eh(){if(Bu)return fl;Bu=1;var e=xd();return fl=function(n,r,i){var l=i.config.validateStatus;!i.status||!l||l(i.status)?n(i):r(e("Request failed with status code "+i.status,i.config,null,i.request,i))},fl}var pl,$u;function Oh(){if($u)return pl;$u=1;var e=ye;return pl=e.isStandardBrowserEnv()?function(){return{write:function(r,i,l,o,s,u){var a=[];a.push(r+"="+encodeURIComponent(i)),e.isNumber(l)&&a.push("expires="+new Date(l).toGMTString()),e.isString(o)&&a.push("path="+o),e.isString(s)&&a.push("domain="+s),u===!0&&a.push("secure"),document.cookie=a.join("; ")},read:function(r){var i=document.cookie.match(new RegExp("(^|;\\s*)("+r+")=([^;]*)"));return i?decodeURIComponent(i[3]):null},remove:function(r){this.write(r,"",Date.now()-864e5)}}}():function(){return{write:function(){},read:function(){return null},remove:function(){}}}(),pl}var hl,Hu;function kh(){return Hu||(Hu=1,hl=function(t){return/^([a-z][a-z\d+\-.]*:)?\/\//i.test(t)}),hl}var ml,qu;function Ph(){return qu||(qu=1,ml=function(t,n){return 
n?t.replace(/\/+$/,"")+"/"+n.replace(/^\/+/,""):t}),ml}var vl,Qu;function _h(){if(Qu)return vl;Qu=1;var e=kh(),t=Ph();return vl=function(r,i){return r&&!e(i)?t(r,i):i},vl}var yl,Wu;function Ch(){if(Wu)return yl;Wu=1;var e=ye,t=["age","authorization","content-length","content-type","etag","expires","from","host","if-modified-since","if-unmodified-since","last-modified","location","max-forwards","proxy-authorization","referer","retry-after","user-agent"];return yl=function(r){var i={},l,o,s;return r&&e.forEach(r.split(` -`),function(a){if(s=a.indexOf(":"),l=e.trim(a.substr(0,s)).toLowerCase(),o=e.trim(a.substr(s+1)),l){if(i[l]&&t.indexOf(l)>=0)return;l==="set-cookie"?i[l]=(i[l]?i[l]:[]).concat([o]):i[l]=i[l]?i[l]+", "+o:o}}),i},yl}var gl,Ku;function xh(){if(Ku)return gl;Ku=1;var e=ye;return gl=e.isStandardBrowserEnv()?function(){var n=/(msie|trident)/i.test(navigator.userAgent),r=document.createElement("a"),i;function l(o){var s=o;return n&&(r.setAttribute("href",s),s=r.href),r.setAttribute("href",s),{href:r.href,protocol:r.protocol?r.protocol.replace(/:$/,""):"",host:r.host,search:r.search?r.search.replace(/^\?/,""):"",hash:r.hash?r.hash.replace(/^#/,""):"",hostname:r.hostname,port:r.port,pathname:r.pathname.charAt(0)==="/"?r.pathname:"/"+r.pathname}}return i=l(window.location.href),function(s){var u=e.isString(s)?l(s):s;return u.protocol===i.protocol&&u.host===i.host}}():function(){return function(){return!0}}(),gl}var Sl,Yu;function Ri(){if(Yu)return Sl;Yu=1;function e(t){this.message=t}return e.prototype.toString=function(){return"Cancel"+(this.message?": "+this.message:"")},e.prototype.__CANCEL__=!0,Sl=e,Sl}var wl,Gu;function Xu(){if(Gu)return wl;Gu=1;var e=ye,t=Eh(),n=Oh(),r=Pd,i=_h(),l=Ch(),o=xh(),s=xd(),u=Cd,a=Ri();return wl=function(c){return new Promise(function(m,y){var g=c.data,P=c.headers,h=c.responseType,p;function v(){c.cancelToken&&c.cancelToken.unsubscribe(p),c.signal&&c.signal.removeEventListener("abort",p)}e.isFormData(g)&&delete P["Content-Type"];var S=new XMLHttpRequest;if(c.auth){var E=c.auth.username||"",_=c.auth.password?unescape(encodeURIComponent(c.auth.password)):"";P.Authorization="Basic "+btoa(E+":"+_)}var C=i(c.baseURL,c.url);S.open(c.method.toUpperCase(),r(C,c.params,c.paramsSerializer),!0),S.timeout=c.timeout;function x(){if(!!S){var A="getAllResponseHeaders"in S?l(S.getAllResponseHeaders()):null,X=!h||h==="text"||h==="json"?S.responseText:S.response,ge={data:X,status:S.status,statusText:S.statusText,headers:A,config:c,request:S};t(function($t){m($t),v()},function($t){y($t),v()},ge),S=null}}if("onloadend"in S?S.onloadend=x:S.onreadystatechange=function(){!S||S.readyState!==4||S.status===0&&!(S.responseURL&&S.responseURL.indexOf("file:")===0)||setTimeout(x)},S.onabort=function(){!S||(y(s("Request aborted",c,"ECONNABORTED",S)),S=null)},S.onerror=function(){y(s("Network Error",c,null,S)),S=null},S.ontimeout=function(){var X=c.timeout?"timeout of "+c.timeout+"ms exceeded":"timeout exceeded",ge=c.transitional||u;c.timeoutErrorMessage&&(X=c.timeoutErrorMessage),y(s(X,c,ge.clarifyTimeoutError?"ETIMEDOUT":"ECONNABORTED",S)),S=null},e.isStandardBrowserEnv()){var M=(c.withCredentials||o(C))&&c.xsrfCookieName?n.read(c.xsrfCookieName):void 0;M&&(P[c.xsrfHeaderName]=M)}"setRequestHeader"in S&&e.forEach(P,function(X,ge){typeof g>"u"&&ge.toLowerCase()==="content-type"?delete P[ge]:S.setRequestHeader(ge,X)}),e.isUndefined(c.withCredentials)||(S.withCredentials=!!c.withCredentials),h&&h!=="json"&&(S.responseType=c.responseType),typeof 
c.onDownloadProgress=="function"&&S.addEventListener("progress",c.onDownloadProgress),typeof c.onUploadProgress=="function"&&S.upload&&S.upload.addEventListener("progress",c.onUploadProgress),(c.cancelToken||c.signal)&&(p=function(A){!S||(y(!A||A&&A.type?new a("canceled"):A),S.abort(),S=null)},c.cancelToken&&c.cancelToken.subscribe(p),c.signal&&(c.signal.aborted?p():c.signal.addEventListener("abort",p))),g||(g=null),S.send(g)})},wl}var te=ye,Ju=wh,Ah=_d,Th=Cd,Rh={"Content-Type":"application/x-www-form-urlencoded"};function Zu(e,t){!te.isUndefined(e)&&te.isUndefined(e["Content-Type"])&&(e["Content-Type"]=t)}function jh(){var e;return(typeof XMLHttpRequest<"u"||typeof process<"u"&&Object.prototype.toString.call(process)==="[object process]")&&(e=Xu()),e}function Fh(e,t,n){if(te.isString(e))try{return(t||JSON.parse)(e),te.trim(e)}catch(r){if(r.name!=="SyntaxError")throw r}return(n||JSON.stringify)(e)}var ji={transitional:Th,adapter:jh(),transformRequest:[function(t,n){return Ju(n,"Accept"),Ju(n,"Content-Type"),te.isFormData(t)||te.isArrayBuffer(t)||te.isBuffer(t)||te.isStream(t)||te.isFile(t)||te.isBlob(t)?t:te.isArrayBufferView(t)?t.buffer:te.isURLSearchParams(t)?(Zu(n,"application/x-www-form-urlencoded;charset=utf-8"),t.toString()):te.isObject(t)||n&&n["Content-Type"]==="application/json"?(Zu(n,"application/json"),Fh(t)):t}],transformResponse:[function(t){var n=this.transitional||ji.transitional,r=n&&n.silentJSONParsing,i=n&&n.forcedJSONParsing,l=!r&&this.responseType==="json";if(l||i&&te.isString(t)&&t.length)try{return JSON.parse(t)}catch(o){if(l)throw o.name==="SyntaxError"?Ah(o,this,"E_JSON_PARSE"):o}return t}],timeout:0,xsrfCookieName:"XSRF-TOKEN",xsrfHeaderName:"X-XSRF-TOKEN",maxContentLength:-1,maxBodyLength:-1,validateStatus:function(t){return t>=200&&t<300},headers:{common:{Accept:"application/json, text/plain, */*"}}};te.forEach(["delete","get","head"],function(t){ji.headers[t]={}});te.forEach(["post","put","patch"],function(t){ji.headers[t]=te.merge(Rh)});var Cs=ji,Nh=ye,Lh=Cs,Ih=function(t,n,r){var i=this||Lh;return Nh.forEach(r,function(o){t=o.call(i,t,n)}),t},El,bu;function Ad(){return bu||(bu=1,El=function(t){return!!(t&&t.__CANCEL__)}),El}var ea=ye,Ol=Ih,Uh=Ad(),Mh=Cs,zh=Ri();function kl(e){if(e.cancelToken&&e.cancelToken.throwIfRequested(),e.signal&&e.signal.aborted)throw new zh("canceled")}var Vh=function(t){kl(t),t.headers=t.headers||{},t.data=Ol.call(t,t.data,t.headers,t.transformRequest),t.headers=ea.merge(t.headers.common||{},t.headers[t.method]||{},t.headers),ea.forEach(["delete","get","head","post","put","patch","common"],function(i){delete t.headers[i]});var n=t.adapter||Mh.adapter;return n(t).then(function(i){return kl(t),i.data=Ol.call(t,i.data,i.headers,t.transformResponse),i},function(i){return Uh(i)||(kl(t),i&&i.response&&(i.response.data=Ol.call(t,i.response.data,i.response.headers,t.transformResponse))),Promise.reject(i)})},Se=ye,Td=function(t,n){n=n||{};var r={};function i(d,c){return Se.isPlainObject(d)&&Se.isPlainObject(c)?Se.merge(d,c):Se.isPlainObject(c)?Se.merge({},c):Se.isArray(c)?c.slice():c}function l(d){if(Se.isUndefined(n[d])){if(!Se.isUndefined(t[d]))return i(void 0,t[d])}else return i(t[d],n[d])}function o(d){if(!Se.isUndefined(n[d]))return i(void 0,n[d])}function s(d){if(Se.isUndefined(n[d])){if(!Se.isUndefined(t[d]))return i(void 0,t[d])}else return i(void 0,n[d])}function u(d){if(d in n)return i(t[d],n[d]);if(d in t)return i(void 0,t[d])}var 
a={url:o,method:o,data:o,baseURL:s,transformRequest:s,transformResponse:s,paramsSerializer:s,timeout:s,timeoutMessage:s,withCredentials:s,adapter:s,responseType:s,xsrfCookieName:s,xsrfHeaderName:s,onUploadProgress:s,onDownloadProgress:s,decompress:s,maxContentLength:s,maxBodyLength:s,transport:s,httpAgent:s,httpsAgent:s,cancelToken:s,socketPath:s,responseEncoding:s,validateStatus:u};return Se.forEach(Object.keys(t).concat(Object.keys(n)),function(c){var f=a[c]||l,m=f(c);Se.isUndefined(m)&&f!==u||(r[c]=m)}),r},Pl,ta;function Rd(){return ta||(ta=1,Pl={version:"0.26.1"}),Pl}var Dh=Rd().version,xs={};["object","boolean","number","function","string","symbol"].forEach(function(e,t){xs[e]=function(r){return typeof r===e||"a"+(t<1?"n ":" ")+e}});var na={};xs.transitional=function(t,n,r){function i(l,o){return"[Axios v"+Dh+"] Transitional option '"+l+"'"+o+(r?". "+r:"")}return function(l,o,s){if(t===!1)throw new Error(i(o," has been removed"+(n?" in "+n:"")));return n&&!na[o]&&(na[o]=!0,console.warn(i(o," has been deprecated since v"+n+" and will be removed in the near future"))),t?t(l,o,s):!0}};function Bh(e,t,n){if(typeof e!="object")throw new TypeError("options must be an object");for(var r=Object.keys(e),i=r.length;i-- >0;){var l=r[i],o=t[l];if(o){var s=e[l],u=s===void 0||o(s,l,e);if(u!==!0)throw new TypeError("option "+l+" must be "+u);continue}if(n!==!0)throw Error("Unknown option "+l)}}var $h={assertOptions:Bh,validators:xs},jd=ye,Hh=Pd,ra=gh,ia=Vh,Fi=Td,Fd=$h,Qt=Fd.validators;function cr(e){this.defaults=e,this.interceptors={request:new ra,response:new ra}}cr.prototype.request=function(t,n){typeof t=="string"?(n=n||{},n.url=t):n=t||{},n=Fi(this.defaults,n),n.method?n.method=n.method.toLowerCase():this.defaults.method?n.method=this.defaults.method.toLowerCase():n.method="get";var r=n.transitional;r!==void 0&&Fd.assertOptions(r,{silentJSONParsing:Qt.transitional(Qt.boolean),forcedJSONParsing:Qt.transitional(Qt.boolean),clarifyTimeoutError:Qt.transitional(Qt.boolean)},!1);var i=[],l=!0;this.interceptors.request.forEach(function(m){typeof m.runWhen=="function"&&m.runWhen(n)===!1||(l=l&&m.synchronous,i.unshift(m.fulfilled,m.rejected))});var o=[];this.interceptors.response.forEach(function(m){o.push(m.fulfilled,m.rejected)});var s;if(!l){var u=[ia,void 0];for(Array.prototype.unshift.apply(u,i),u=u.concat(o),s=Promise.resolve(n);u.length;)s=s.then(u.shift(),u.shift());return s}for(var a=n;i.length;){var d=i.shift(),c=i.shift();try{a=d(a)}catch(f){c(f);break}}try{s=ia(a)}catch(f){return Promise.reject(f)}for(;o.length;)s=s.then(o.shift(),o.shift());return s};cr.prototype.getUri=function(t){return t=Fi(this.defaults,t),Hh(t.url,t.params,t.paramsSerializer).replace(/^\?/,"")};jd.forEach(["delete","get","head","options"],function(t){cr.prototype[t]=function(n,r){return this.request(Fi(r||{},{method:t,url:n,data:(r||{}).data}))}});jd.forEach(["post","put","patch"],function(t){cr.prototype[t]=function(n,r,i){return this.request(Fi(i||{},{method:t,url:n,data:r}))}});var qh=cr,_l,la;function Qh(){if(la)return _l;la=1;var e=Ri();function t(n){if(typeof n!="function")throw new TypeError("executor must be a function.");var r;this.promise=new Promise(function(o){r=o});var i=this;this.promise.then(function(l){if(!!i._listeners){var o,s=i._listeners.length;for(o=0;oxo(e,r,n)):Object.keys(t).forEach(r=>xo(e,t[r],`${n}${n!==""?".":""}${r}`)):e.has(n)?e.append(n,t):e.set(n,t)}H.setSearchParams=function(e,...t){const n=new 
URLSearchParams(e.search);xo(n,t),e.search=n.toString()};H.serializeDataIfNeeded=function(e,t,n){const r=typeof e!="string";return(r&&n&&n.isJsonMime?n.isJsonMime(t.headers["Content-Type"]):r)?JSON.stringify(e!==void 0?e:{}):e||""};H.toPathString=function(e){return e.pathname+e.search+e.hash};H.createRequestFunction=function(e,t,n,r){return(i=t,l=n)=>{const o=Object.assign(Object.assign({},e.options),{url:((r==null?void 0:r.basePath)||l)+e.url});return i.request(o)}};(function(e){var t=ut&&ut.__awaiter||function(o,s,u,a){function d(c){return c instanceof u?c:new u(function(f){f(c)})}return new(u||(u=Promise))(function(c,f){function m(P){try{g(a.next(P))}catch(h){f(h)}}function y(P){try{g(a.throw(P))}catch(h){f(h)}}function g(P){P.done?c(P.value):d(P.value).then(m,y)}g((a=a.apply(o,s||[])).next())})};Object.defineProperty(e,"__esModule",{value:!0}),e.OpenAIApi=e.OpenAIApiFactory=e.OpenAIApiFp=e.OpenAIApiAxiosParamCreator=e.CreateImageRequestResponseFormatEnum=e.CreateImageRequestSizeEnum=void 0;const n=Os.exports,r=H,i=As;e.CreateImageRequestSizeEnum={_256x256:"256x256",_512x512:"512x512",_1024x1024:"1024x1024"},e.CreateImageRequestResponseFormatEnum={Url:"url",B64Json:"b64_json"},e.OpenAIApiAxiosParamCreator=function(o){return{cancelFineTune:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("cancelFineTune","fineTuneId",s);const a="/fine-tunes/{fine_tune_id}/cancel".replace("{fine_tune_id}",encodeURIComponent(String(s))),d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"POST"},c),u),m={},y={};r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),{url:r.toPathString(d),options:f}}),createAnswer:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("createAnswer","createAnswerRequest",s);const a="/answers",d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"POST"},c),u),m={},y={};m["Content-Type"]="application/json",r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),f.data=r.serializeDataIfNeeded(s,f,o),{url:r.toPathString(d),options:f}}),createClassification:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("createClassification","createClassificationRequest",s);const a="/classifications",d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"POST"},c),u),m={},y={};m["Content-Type"]="application/json",r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),f.data=r.serializeDataIfNeeded(s,f,o),{url:r.toPathString(d),options:f}}),createCompletion:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("createCompletion","createCompletionRequest",s);const a="/completions",d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"POST"},c),u),m={},y={};m["Content-Type"]="application/json",r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),f.data=r.serializeDataIfNeeded(s,f,o),{url:r.toPathString(d),options:f}}),createEdit:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("createEdit","createEditRequest",s);const a="/edits",d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const 
f=Object.assign(Object.assign({method:"POST"},c),u),m={},y={};m["Content-Type"]="application/json",r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),f.data=r.serializeDataIfNeeded(s,f,o),{url:r.toPathString(d),options:f}}),createEmbedding:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("createEmbedding","createEmbeddingRequest",s);const a="/embeddings",d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"POST"},c),u),m={},y={};m["Content-Type"]="application/json",r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),f.data=r.serializeDataIfNeeded(s,f,o),{url:r.toPathString(d),options:f}}),createFile:(s,u,a={})=>t(this,void 0,void 0,function*(){r.assertParamExists("createFile","file",s),r.assertParamExists("createFile","purpose",u);const d="/files",c=new URL(d,r.DUMMY_BASE_URL);let f;o&&(f=o.baseOptions);const m=Object.assign(Object.assign({method:"POST"},f),a),y={},g={},P=new(o&&o.formDataCtor||FormData);s!==void 0&&P.append("file",s),u!==void 0&&P.append("purpose",u),y["Content-Type"]="multipart/form-data",r.setSearchParams(c,g);let h=f&&f.headers?f.headers:{};return m.headers=Object.assign(Object.assign(Object.assign(Object.assign({},y),P.getHeaders()),h),a.headers),m.data=P,{url:r.toPathString(c),options:m}}),createFineTune:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("createFineTune","createFineTuneRequest",s);const a="/fine-tunes",d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"POST"},c),u),m={},y={};m["Content-Type"]="application/json",r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),f.data=r.serializeDataIfNeeded(s,f,o),{url:r.toPathString(d),options:f}}),createImage:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("createImage","createImageRequest",s);const a="/images/generations",d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"POST"},c),u),m={},y={};m["Content-Type"]="application/json",r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),f.data=r.serializeDataIfNeeded(s,f,o),{url:r.toPathString(d),options:f}}),createImageEdit:(s,u,a,d,c,f,m,y={})=>t(this,void 0,void 0,function*(){r.assertParamExists("createImageEdit","image",s),r.assertParamExists("createImageEdit","mask",u),r.assertParamExists("createImageEdit","prompt",a);const g="/images/edits",P=new URL(g,r.DUMMY_BASE_URL);let h;o&&(h=o.baseOptions);const p=Object.assign(Object.assign({method:"POST"},h),y),v={},S={},E=new(o&&o.formDataCtor||FormData);s!==void 0&&E.append("image",s),u!==void 0&&E.append("mask",u),a!==void 0&&E.append("prompt",a),d!==void 0&&E.append("n",d),c!==void 0&&E.append("size",c),f!==void 0&&E.append("response_format",f),m!==void 0&&E.append("user",m),v["Content-Type"]="multipart/form-data",r.setSearchParams(P,S);let _=h&&h.headers?h.headers:{};return p.headers=Object.assign(Object.assign(Object.assign(Object.assign({},v),E.getHeaders()),_),y.headers),p.data=E,{url:r.toPathString(P),options:p}}),createImageVariation:(s,u,a,d,c,f={})=>t(this,void 0,void 0,function*(){r.assertParamExists("createImageVariation","image",s);const m="/images/variations",y=new URL(m,r.DUMMY_BASE_URL);let 
g;o&&(g=o.baseOptions);const P=Object.assign(Object.assign({method:"POST"},g),f),h={},p={},v=new(o&&o.formDataCtor||FormData);s!==void 0&&v.append("image",s),u!==void 0&&v.append("n",u),a!==void 0&&v.append("size",a),d!==void 0&&v.append("response_format",d),c!==void 0&&v.append("user",c),h["Content-Type"]="multipart/form-data",r.setSearchParams(y,p);let S=g&&g.headers?g.headers:{};return P.headers=Object.assign(Object.assign(Object.assign(Object.assign({},h),v.getHeaders()),S),f.headers),P.data=v,{url:r.toPathString(y),options:P}}),createModeration:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("createModeration","createModerationRequest",s);const a="/moderations",d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"POST"},c),u),m={},y={};m["Content-Type"]="application/json",r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),f.data=r.serializeDataIfNeeded(s,f,o),{url:r.toPathString(d),options:f}}),createSearch:(s,u,a={})=>t(this,void 0,void 0,function*(){r.assertParamExists("createSearch","engineId",s),r.assertParamExists("createSearch","createSearchRequest",u);const d="/engines/{engine_id}/search".replace("{engine_id}",encodeURIComponent(String(s))),c=new URL(d,r.DUMMY_BASE_URL);let f;o&&(f=o.baseOptions);const m=Object.assign(Object.assign({method:"POST"},f),a),y={},g={};y["Content-Type"]="application/json",r.setSearchParams(c,g);let P=f&&f.headers?f.headers:{};return m.headers=Object.assign(Object.assign(Object.assign({},y),P),a.headers),m.data=r.serializeDataIfNeeded(u,m,o),{url:r.toPathString(c),options:m}}),deleteFile:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("deleteFile","fileId",s);const a="/files/{file_id}".replace("{file_id}",encodeURIComponent(String(s))),d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"DELETE"},c),u),m={},y={};r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),{url:r.toPathString(d),options:f}}),deleteModel:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("deleteModel","model",s);const a="/models/{model}".replace("{model}",encodeURIComponent(String(s))),d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"DELETE"},c),u),m={},y={};r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),{url:r.toPathString(d),options:f}}),downloadFile:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("downloadFile","fileId",s);const a="/files/{file_id}/content".replace("{file_id}",encodeURIComponent(String(s))),d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"GET"},c),u),m={},y={};r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),{url:r.toPathString(d),options:f}}),listEngines:(s={})=>t(this,void 0,void 0,function*(){const u="/engines",a=new URL(u,r.DUMMY_BASE_URL);let d;o&&(d=o.baseOptions);const c=Object.assign(Object.assign({method:"GET"},d),s),f={},m={};r.setSearchParams(a,m);let y=d&&d.headers?d.headers:{};return c.headers=Object.assign(Object.assign(Object.assign({},f),y),s.headers),{url:r.toPathString(a),options:c}}),listFiles:(s={})=>t(this,void 0,void 0,function*(){const 
u="/files",a=new URL(u,r.DUMMY_BASE_URL);let d;o&&(d=o.baseOptions);const c=Object.assign(Object.assign({method:"GET"},d),s),f={},m={};r.setSearchParams(a,m);let y=d&&d.headers?d.headers:{};return c.headers=Object.assign(Object.assign(Object.assign({},f),y),s.headers),{url:r.toPathString(a),options:c}}),listFineTuneEvents:(s,u,a={})=>t(this,void 0,void 0,function*(){r.assertParamExists("listFineTuneEvents","fineTuneId",s);const d="/fine-tunes/{fine_tune_id}/events".replace("{fine_tune_id}",encodeURIComponent(String(s))),c=new URL(d,r.DUMMY_BASE_URL);let f;o&&(f=o.baseOptions);const m=Object.assign(Object.assign({method:"GET"},f),a),y={},g={};u!==void 0&&(g.stream=u),r.setSearchParams(c,g);let P=f&&f.headers?f.headers:{};return m.headers=Object.assign(Object.assign(Object.assign({},y),P),a.headers),{url:r.toPathString(c),options:m}}),listFineTunes:(s={})=>t(this,void 0,void 0,function*(){const u="/fine-tunes",a=new URL(u,r.DUMMY_BASE_URL);let d;o&&(d=o.baseOptions);const c=Object.assign(Object.assign({method:"GET"},d),s),f={},m={};r.setSearchParams(a,m);let y=d&&d.headers?d.headers:{};return c.headers=Object.assign(Object.assign(Object.assign({},f),y),s.headers),{url:r.toPathString(a),options:c}}),listModels:(s={})=>t(this,void 0,void 0,function*(){const u="/models",a=new URL(u,r.DUMMY_BASE_URL);let d;o&&(d=o.baseOptions);const c=Object.assign(Object.assign({method:"GET"},d),s),f={},m={};r.setSearchParams(a,m);let y=d&&d.headers?d.headers:{};return c.headers=Object.assign(Object.assign(Object.assign({},f),y),s.headers),{url:r.toPathString(a),options:c}}),retrieveEngine:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("retrieveEngine","engineId",s);const a="/engines/{engine_id}".replace("{engine_id}",encodeURIComponent(String(s))),d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"GET"},c),u),m={},y={};r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),{url:r.toPathString(d),options:f}}),retrieveFile:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("retrieveFile","fileId",s);const a="/files/{file_id}".replace("{file_id}",encodeURIComponent(String(s))),d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"GET"},c),u),m={},y={};r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),{url:r.toPathString(d),options:f}}),retrieveFineTune:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("retrieveFineTune","fineTuneId",s);const a="/fine-tunes/{fine_tune_id}".replace("{fine_tune_id}",encodeURIComponent(String(s))),d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"GET"},c),u),m={},y={};r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),{url:r.toPathString(d),options:f}}),retrieveModel:(s,u={})=>t(this,void 0,void 0,function*(){r.assertParamExists("retrieveModel","model",s);const a="/models/{model}".replace("{model}",encodeURIComponent(String(s))),d=new URL(a,r.DUMMY_BASE_URL);let c;o&&(c=o.baseOptions);const f=Object.assign(Object.assign({method:"GET"},c),u),m={},y={};r.setSearchParams(d,y);let g=c&&c.headers?c.headers:{};return f.headers=Object.assign(Object.assign(Object.assign({},m),g),u.headers),{url:r.toPathString(d),options:f}})}},e.OpenAIApiFp=function(o){const 
s=e.OpenAIApiAxiosParamCreator(o);return{cancelFineTune(u,a){return t(this,void 0,void 0,function*(){const d=yield s.cancelFineTune(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},createAnswer(u,a){return t(this,void 0,void 0,function*(){const d=yield s.createAnswer(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},createClassification(u,a){return t(this,void 0,void 0,function*(){const d=yield s.createClassification(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},createCompletion(u,a){return t(this,void 0,void 0,function*(){const d=yield s.createCompletion(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},createEdit(u,a){return t(this,void 0,void 0,function*(){const d=yield s.createEdit(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},createEmbedding(u,a){return t(this,void 0,void 0,function*(){const d=yield s.createEmbedding(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},createFile(u,a,d){return t(this,void 0,void 0,function*(){const c=yield s.createFile(u,a,d);return r.createRequestFunction(c,n.default,i.BASE_PATH,o)})},createFineTune(u,a){return t(this,void 0,void 0,function*(){const d=yield s.createFineTune(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},createImage(u,a){return t(this,void 0,void 0,function*(){const d=yield s.createImage(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},createImageEdit(u,a,d,c,f,m,y,g){return t(this,void 0,void 0,function*(){const P=yield s.createImageEdit(u,a,d,c,f,m,y,g);return r.createRequestFunction(P,n.default,i.BASE_PATH,o)})},createImageVariation(u,a,d,c,f,m){return t(this,void 0,void 0,function*(){const y=yield s.createImageVariation(u,a,d,c,f,m);return r.createRequestFunction(y,n.default,i.BASE_PATH,o)})},createModeration(u,a){return t(this,void 0,void 0,function*(){const d=yield s.createModeration(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},createSearch(u,a,d){return t(this,void 0,void 0,function*(){const c=yield s.createSearch(u,a,d);return r.createRequestFunction(c,n.default,i.BASE_PATH,o)})},deleteFile(u,a){return t(this,void 0,void 0,function*(){const d=yield s.deleteFile(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},deleteModel(u,a){return t(this,void 0,void 0,function*(){const d=yield s.deleteModel(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},downloadFile(u,a){return t(this,void 0,void 0,function*(){const d=yield s.downloadFile(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},listEngines(u){return t(this,void 0,void 0,function*(){const a=yield s.listEngines(u);return r.createRequestFunction(a,n.default,i.BASE_PATH,o)})},listFiles(u){return t(this,void 0,void 0,function*(){const a=yield s.listFiles(u);return r.createRequestFunction(a,n.default,i.BASE_PATH,o)})},listFineTuneEvents(u,a,d){return t(this,void 0,void 0,function*(){const c=yield s.listFineTuneEvents(u,a,d);return r.createRequestFunction(c,n.default,i.BASE_PATH,o)})},listFineTunes(u){return t(this,void 0,void 0,function*(){const a=yield s.listFineTunes(u);return r.createRequestFunction(a,n.default,i.BASE_PATH,o)})},listModels(u){return t(this,void 0,void 0,function*(){const a=yield s.listModels(u);return r.createRequestFunction(a,n.default,i.BASE_PATH,o)})},retrieveEngine(u,a){return t(this,void 0,void 0,function*(){const d=yield s.retrieveEngine(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},retrieveFile(u,a){return t(this,void 0,void 
0,function*(){const d=yield s.retrieveFile(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},retrieveFineTune(u,a){return t(this,void 0,void 0,function*(){const d=yield s.retrieveFineTune(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})},retrieveModel(u,a){return t(this,void 0,void 0,function*(){const d=yield s.retrieveModel(u,a);return r.createRequestFunction(d,n.default,i.BASE_PATH,o)})}}},e.OpenAIApiFactory=function(o,s,u){const a=e.OpenAIApiFp(o);return{cancelFineTune(d,c){return a.cancelFineTune(d,c).then(f=>f(u,s))},createAnswer(d,c){return a.createAnswer(d,c).then(f=>f(u,s))},createClassification(d,c){return a.createClassification(d,c).then(f=>f(u,s))},createCompletion(d,c){return a.createCompletion(d,c).then(f=>f(u,s))},createEdit(d,c){return a.createEdit(d,c).then(f=>f(u,s))},createEmbedding(d,c){return a.createEmbedding(d,c).then(f=>f(u,s))},createFile(d,c,f){return a.createFile(d,c,f).then(m=>m(u,s))},createFineTune(d,c){return a.createFineTune(d,c).then(f=>f(u,s))},createImage(d,c){return a.createImage(d,c).then(f=>f(u,s))},createImageEdit(d,c,f,m,y,g,P,h){return a.createImageEdit(d,c,f,m,y,g,P,h).then(p=>p(u,s))},createImageVariation(d,c,f,m,y,g){return a.createImageVariation(d,c,f,m,y,g).then(P=>P(u,s))},createModeration(d,c){return a.createModeration(d,c).then(f=>f(u,s))},createSearch(d,c,f){return a.createSearch(d,c,f).then(m=>m(u,s))},deleteFile(d,c){return a.deleteFile(d,c).then(f=>f(u,s))},deleteModel(d,c){return a.deleteModel(d,c).then(f=>f(u,s))},downloadFile(d,c){return a.downloadFile(d,c).then(f=>f(u,s))},listEngines(d){return a.listEngines(d).then(c=>c(u,s))},listFiles(d){return a.listFiles(d).then(c=>c(u,s))},listFineTuneEvents(d,c,f){return a.listFineTuneEvents(d,c,f).then(m=>m(u,s))},listFineTunes(d){return a.listFineTunes(d).then(c=>c(u,s))},listModels(d){return a.listModels(d).then(c=>c(u,s))},retrieveEngine(d,c){return a.retrieveEngine(d,c).then(f=>f(u,s))},retrieveFile(d,c){return a.retrieveFile(d,c).then(f=>f(u,s))},retrieveFineTune(d,c){return a.retrieveFineTune(d,c).then(f=>f(u,s))},retrieveModel(d,c){return a.retrieveModel(d,c).then(f=>f(u,s))}}};class l extends i.BaseAPI{cancelFineTune(s,u){return e.OpenAIApiFp(this.configuration).cancelFineTune(s,u).then(a=>a(this.axios,this.basePath))}createAnswer(s,u){return e.OpenAIApiFp(this.configuration).createAnswer(s,u).then(a=>a(this.axios,this.basePath))}createClassification(s,u){return e.OpenAIApiFp(this.configuration).createClassification(s,u).then(a=>a(this.axios,this.basePath))}createCompletion(s,u){return e.OpenAIApiFp(this.configuration).createCompletion(s,u).then(a=>a(this.axios,this.basePath))}createEdit(s,u){return e.OpenAIApiFp(this.configuration).createEdit(s,u).then(a=>a(this.axios,this.basePath))}createEmbedding(s,u){return e.OpenAIApiFp(this.configuration).createEmbedding(s,u).then(a=>a(this.axios,this.basePath))}createFile(s,u,a){return e.OpenAIApiFp(this.configuration).createFile(s,u,a).then(d=>d(this.axios,this.basePath))}createFineTune(s,u){return e.OpenAIApiFp(this.configuration).createFineTune(s,u).then(a=>a(this.axios,this.basePath))}createImage(s,u){return e.OpenAIApiFp(this.configuration).createImage(s,u).then(a=>a(this.axios,this.basePath))}createImageEdit(s,u,a,d,c,f,m,y){return e.OpenAIApiFp(this.configuration).createImageEdit(s,u,a,d,c,f,m,y).then(g=>g(this.axios,this.basePath))}createImageVariation(s,u,a,d,c,f){return e.OpenAIApiFp(this.configuration).createImageVariation(s,u,a,d,c,f).then(m=>m(this.axios,this.basePath))}createModeration(s,u){return 
e.OpenAIApiFp(this.configuration).createModeration(s,u).then(a=>a(this.axios,this.basePath))}createSearch(s,u,a){return e.OpenAIApiFp(this.configuration).createSearch(s,u,a).then(d=>d(this.axios,this.basePath))}deleteFile(s,u){return e.OpenAIApiFp(this.configuration).deleteFile(s,u).then(a=>a(this.axios,this.basePath))}deleteModel(s,u){return e.OpenAIApiFp(this.configuration).deleteModel(s,u).then(a=>a(this.axios,this.basePath))}downloadFile(s,u){return e.OpenAIApiFp(this.configuration).downloadFile(s,u).then(a=>a(this.axios,this.basePath))}listEngines(s){return e.OpenAIApiFp(this.configuration).listEngines(s).then(u=>u(this.axios,this.basePath))}listFiles(s){return e.OpenAIApiFp(this.configuration).listFiles(s).then(u=>u(this.axios,this.basePath))}listFineTuneEvents(s,u,a){return e.OpenAIApiFp(this.configuration).listFineTuneEvents(s,u,a).then(d=>d(this.axios,this.basePath))}listFineTunes(s){return e.OpenAIApiFp(this.configuration).listFineTunes(s).then(u=>u(this.axios,this.basePath))}listModels(s){return e.OpenAIApiFp(this.configuration).listModels(s).then(u=>u(this.axios,this.basePath))}retrieveEngine(s,u){return e.OpenAIApiFp(this.configuration).retrieveEngine(s,u).then(a=>a(this.axios,this.basePath))}retrieveFile(s,u){return e.OpenAIApiFp(this.configuration).retrieveFile(s,u).then(a=>a(this.axios,this.basePath))}retrieveFineTune(s,u){return e.OpenAIApiFp(this.configuration).retrieveFineTune(s,u).then(a=>a(this.axios,this.basePath))}retrieveModel(s,u){return e.OpenAIApiFp(this.configuration).retrieveModel(s,u).then(a=>a(this.axios,this.basePath))}}e.OpenAIApi=l})(Sd);var Ni={};const Zh="openai",bh="3.1.0",em="Node.js library for the OpenAI API",tm={type:"git",url:"git@github.com:openai/openai-node.git"},nm=["openai","open","ai","gpt-3","gpt3"],rm="OpenAI",im="MIT",lm="./dist/index.js",om="./dist/index.d.ts",sm={build:"tsc --outDir dist/"},um={axios:"^0.26.0","form-data":"^4.0.0"},am={"@types/node":"^12.11.5",typescript:"^3.6.4"},cm={name:Zh,version:bh,description:em,repository:tm,keywords:nm,author:rm,license:im,main:lm,types:om,scripts:sm,dependencies:um,devDependencies:am};var Al,aa;function dm(){return aa||(aa=1,Al=typeof self=="object"?self.FormData:window.FormData),Al}Object.defineProperty(Ni,"__esModule",{value:!0});Ni.Configuration=void 0;const fm=cm;class pm{constructor(t={}){this.apiKey=t.apiKey,this.organization=t.organization,this.username=t.username,this.password=t.password,this.accessToken=t.accessToken,this.basePath=t.basePath,this.baseOptions=t.baseOptions,this.formDataCtor=t.formDataCtor,this.baseOptions||(this.baseOptions={}),this.baseOptions.headers=Object.assign({"User-Agent":`OpenAI/NodeJS/${fm.version}`,Authorization:`Bearer ${this.apiKey}`},this.baseOptions.headers),this.organization&&(this.baseOptions.headers["OpenAI-Organization"]=this.organization),this.formDataCtor||(this.formDataCtor=dm())}isJsonMime(t){const n=new RegExp("^(application/json|[^;/ ]+/[^;/ ]+[+]json)[ ]*(;.*)?$","i");return t!==null&&(n.test(t)||t.toLowerCase()==="application/json-patch+json")}}Ni.Configuration=pm;(function(e){var t=ut&&ut.__createBinding||(Object.create?function(r,i,l,o){o===void 0&&(o=l),Object.defineProperty(r,o,{enumerable:!0,get:function(){return i[l]}})}:function(r,i,l,o){o===void 0&&(o=l),r[o]=i[l]}),n=ut&&ut.__exportStar||function(r,i){for(var l in r)l!=="default"&&!i.hasOwnProperty(l)&&t(i,r,l)};Object.defineProperty(e,"__esModule",{value:!0}),n(Sd,e),n(Ni,e)})(Po);const hm=async(e,t,n)=>{const r={...t},i=r.tokensWanted-e.length;r.tokensWanted=void 0,r.prompt=e;const 
l=new Po.Configuration({apiKey:{VITE_TEST:"CLIENT_ENV",BASE_URL:"/",MODE:"production",DEV:!1,PROD:!0}.VITE_OPENAI_API_KEY||n}),o=new Po.OpenAIApi(l),s={model:"text-davinci-002",...r,max_tokens:i};console.log("Constructed Config",s);const u=await o.createCompletion(s).catch(a=>{alert(a.message)});if(u){const a=u.data.choices[0].text;return console.log(u.data),a}},mm=(e,t)=>{const[n,r]=$e.exports.useState(()=>{if(typeof window>"u")return t;try{const l=window.localStorage.getItem(e);return l?JSON.parse(l):t}catch(l){return console.log(l),t}});return[n,l=>{try{const o=l instanceof Function?l(n):l;r(o),typeof window<"u"&&window.localStorage.setItem(e,JSON.stringify(o))}catch(o){console.log(o)}}]};var Li={exports:{}},Ii={};/** - * @license React - * react-jsx-runtime.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var vm=$e.exports,ym=Symbol.for("react.element"),gm=Symbol.for("react.fragment"),Sm=Object.prototype.hasOwnProperty,wm=vm.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.ReactCurrentOwner,Em={key:!0,ref:!0,__self:!0,__source:!0};function Ld(e,t,n){var r,i={},l=null,o=null;n!==void 0&&(l=""+n),t.key!==void 0&&(l=""+t.key),t.ref!==void 0&&(o=t.ref);for(r in t)Sm.call(t,r)&&!Em.hasOwnProperty(r)&&(i[r]=t[r]);if(e&&e.defaultProps)for(r in t=e.defaultProps,t)i[r]===void 0&&(i[r]=t[r]);return{$$typeof:ym,type:e,key:l,ref:o,props:i,_owner:wm.current}}Ii.Fragment=gm;Ii.jsx=Ld;Ii.jsxs=Ld;(function(e){e.exports=Ii})(Li);const Om=Li.exports.Fragment,z=Li.exports.jsx,Be=Li.exports.jsxs,km=({min:e=0,max:t=100,value:n=0,onValueChanged:r})=>Be("span",{className:"slider",style:{"--progress":(n-e)/(t-e)*100+"%"},children:[z("span",{className:"slider__progress-bar"}),z("input",{className:"slider__input",type:"range",min:e,max:t,value:n,onChange:i=>{const l=parseInt(i.target.value);r(l)}})]});function Tl({name:e,value:t,min:n,max:r,divider:i,onValueChanged:l}){return z("div",{children:Be("div",{style:{display:"flex",gap:"5px",whiteSpace:"nowrap",justifyContent:"space-between",verticalAlign:"middle"},children:[Be("b",{children:[e,": ",t.toFixed(2)]}),Be("div",{style:{display:"flex",maxWidth:"500px",gap:"10px"},children:[n.toFixed(2),z(km,{onValueChanged:o=>l(o/i),value:t*i,min:n*i,max:r*i}),r.toFixed(2)]})]})})}function Pm(){const[e,t]=mm("anic_gui_openaikey",""),[n,r]=$e.exports.useState("Ein neuronales Netzwerk mit Namen Anic schreibt eine total verr\xFCckte Kolumne f\xFCr eine \xFCberregionale deutsche Zeitung. Sie ist bekannt f\xFCr ihren stilistischen Witz und ihre ungew\xF6hnlichen Blickwinkel. 
Dies ist die erste Kolumne von Anic und sie wird die Leser*innen vom Hocker hauen."),[i,l]=$e.exports.useState([]),[o,s]=$e.exports.useState(!1),[u,a]=$e.exports.useState(!1),[d,c]=$e.exports.useState({temperature:.9,presence_penalty:1.8,frequency_penalty:1.89,tokensWanted:4096}),f=i.reduce((y,g)=>y+g.length,0);return console.log(d),Be("div",{className:"App",children:[z("h2",{children:"ANIC GUI"}),z("input",{type:"text",value:e,width:100,placeholder:"OPEN-AI Access Key",onChange:y=>t(y.target.value)}),z("br",{}),"Initialer Prompt:",z("br",{}),Be("div",{style:{maxWidth:"660px",margin:"0 auto"},children:[z("textarea",{rows:"6",cols:"80",value:n,onChange:y=>r(y.target.value),style:{maxWidth:"100%"}}),z("br",{}),o?z("button",{disabled:!0,children:"ANIC l\xE4dt..."}):Be(Om,{children:[z("button",{onClick:async()=>{const y=i.length==0?n+` - -`:n+` - -`+i.join(` -`)+` -`;console.log("NewPrompt",y),s(!0);const g=await hm(y,d,e);if(s(!1),console.log("Received Text",g),g){const P=[...i];P.push(g),l(P)}},children:"N\xE4chsten Prompt starten"}),z("button",{onClick:()=>l([]),children:"Zur\xFCcksetzen"}),Be("button",{onClick:()=>a(y=>!y),children:["Einstellungen ",u?"verbergen":"anzeigen"]})]}),z("br",{}),Be("div",{style:{display:u?"block":"none"},children:[z(Tl,{onValueChanged:y=>c({...d,temperature:y}),name:"Temperature",value:d.temperature,divider:100,min:0,max:1}),z(Tl,{onValueChanged:y=>c({...d,presence_penalty:y}),name:"Presence-Penalty",value:d.presence_penalty,divider:100,min:-2,max:2}),z(Tl,{onValueChanged:y=>c({...d,frequency_penalty:y}),name:"Frequency-Penalty",value:d.frequency_penalty,divider:100,min:-2,max:2})]})]}),"Zeichencount: ",f,z("h3",{children:"Resultat"}),z("div",{children:i.map((y,g)=>Be("div",{children:[z("div",{dangerouslySetInnerHTML:{__html:y.replace(/(?:\r\n|\r|\n)/g,"
")}}),g===i.length-1&&z("button",{onClick:()=>l(i.slice(0,-1)),children:"Diesen Teil l\xF6schen"}),z("hr",{})]},y+g))})]})}Rl.createRoot(document.getElementById("root")).render(z(Zd.StrictMode,{children:z(Pm,{})})); diff --git a/spaces/WangZeJun/bloom-820m-chat/app.py b/spaces/WangZeJun/bloom-820m-chat/app.py deleted file mode 100644 index 8ee95c37774a02e250e4a397440b7a88fd90cd38..0000000000000000000000000000000000000000 --- a/spaces/WangZeJun/bloom-820m-chat/app.py +++ /dev/null @@ -1,271 +0,0 @@ -# Copyright 2023 MosaicML spaces authors -# SPDX-License-Identifier: Apache-2.0 -from typing import Optional -import datetime -import os -from threading import Event, Thread -from uuid import uuid4 - -import gradio as gr -import requests -import torch -from transformers import ( - BloomForCausalLM, - BloomTokenizerFast, - StoppingCriteria, - StoppingCriteriaList, - TextIteratorStreamer, -) - - -model_name = "WangZeJun/bloom-820m-chat" -max_new_tokens = 1024 - - -print(f"Starting to load the model {model_name} into memory") - -tok = BloomTokenizerFast.from_pretrained(model_name) -m = BloomForCausalLM.from_pretrained(model_name).eval() - -# tok.convert_tokens_to_ids(["<|im_end|>", "<|endoftext|>"]) -stop_token_ids = [tok.eos_token_id] - -print(f"Successfully loaded the model {model_name} into memory") - - - -class StopOnTokens(StoppingCriteria): - def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: - for stop_id in stop_token_ids: - if input_ids[0][-1] == stop_id: - return True - return False - - -def convert_history_to_text(history): - - user_input = history[-1][0] - - input_pattern = "{}" - text = input_pattern.format(user_input) - return text - - - -def log_conversation(conversation_id, history, messages, generate_kwargs): - logging_url = os.getenv("LOGGING_URL", None) - if logging_url is None: - return - - timestamp = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S") - - data = { - "conversation_id": conversation_id, - "timestamp": timestamp, - "history": history, - "messages": messages, - "generate_kwargs": generate_kwargs, - } - - try: - requests.post(logging_url, json=data) - except requests.exceptions.RequestException as e: - print(f"Error logging conversation: {e}") - - -def user(message, history): - # Append the user's message to the conversation history - return "", history + [[message, ""]] - - -def bot(history, temperature, top_p, top_k, repetition_penalty, conversation_id): - print(f"history: {history}") - # Initialize a StopOnTokens object - stop = StopOnTokens() - - # Construct the input message string for the model by concatenating the current system message and conversation history - messages = convert_history_to_text(history) - - # Tokenize the messages string - input_ids = tok(messages, return_tensors="pt").input_ids - input_ids = input_ids.to(m.device) - streamer = TextIteratorStreamer( - tok, timeout=10.0, skip_prompt=True, skip_special_tokens=True) - generate_kwargs = dict( - input_ids=input_ids, - max_new_tokens=max_new_tokens, - temperature=temperature, - do_sample=temperature > 0.0, - top_p=top_p, - top_k=top_k, - repetition_penalty=repetition_penalty, - streamer=streamer, - stopping_criteria=StoppingCriteriaList([stop]), - ) - - stream_complete = Event() - - def generate_and_signal_complete(): - m.generate(**generate_kwargs) - stream_complete.set() - - def log_after_stream_complete(): - stream_complete.wait() - log_conversation( - conversation_id, - history, - messages, - { - "top_k": top_k, - "top_p": top_p, - "temperature": 
-                "repetition_penalty": repetition_penalty,
-            },
-        )
-
-    t1 = Thread(target=generate_and_signal_complete)
-    t1.start()
-
-    t2 = Thread(target=log_after_stream_complete)
-    t2.start()
-
-    # Initialize an empty string to store the generated text
-    partial_text = ""
-    for new_text in streamer:
-        partial_text += new_text
-        history[-1][1] = partial_text
-        yield history
-
-
-def get_uuid():
-    return str(uuid4())
-
-
-with gr.Blocks(
-    theme=gr.themes.Soft(),
-    css=".disclaimer {font-variant-caps: all-small-caps;}",
-) as demo:
-    conversation_id = gr.State(get_uuid)
-    gr.Markdown(
-        """
-        AI assistant based on bloom-1b1
-        Model: https://huggingface.co/WangZeJun/bloom-820m-chat
-        """
-    )
-    chatbot = gr.Chatbot().style(height=500)
-    with gr.Row():
-        with gr.Column():
-            msg = gr.Textbox(
-                label="Chat Message Box",
-                placeholder="Chat Message Box",
-                show_label=False,
-            ).style(container=False)
-        with gr.Column():
-            with gr.Row():
-                submit = gr.Button("Submit")
-                stop = gr.Button("Stop")
-                clear = gr.Button("Clear")
-    with gr.Row():
-        with gr.Accordion("Advanced Options:", open=False):
-            with gr.Row():
-                with gr.Column():
-                    with gr.Row():
-                        temperature = gr.Slider(
-                            label="Temperature",
-                            value=0.1,
-                            minimum=0.0,
-                            maximum=1.0,
-                            step=0.1,
-                            interactive=True,
-                            info="Higher values produce more diverse outputs",
-                        )
-                with gr.Column():
-                    with gr.Row():
-                        top_p = gr.Slider(
-                            label="Top-p (nucleus sampling)",
-                            value=1.0,
-                            minimum=0.0,
-                            maximum=1,
-                            step=0.01,
-                            interactive=True,
-                            info=(
-                                "Sample from the smallest possible set of tokens whose cumulative probability "
-                                "exceeds top_p. Set to 1 to disable and sample from all tokens."
-                            ),
-                        )
-                with gr.Column():
-                    with gr.Row():
-                        top_k = gr.Slider(
-                            label="Top-k",
-                            value=0,
-                            minimum=0.0,
-                            maximum=200,
-                            step=1,
-                            interactive=True,
-                            info="Sample from a shortlist of top-k tokens — 0 to disable and sample from all tokens.",
-                        )
-                with gr.Column():
-                    with gr.Row():
-                        repetition_penalty = gr.Slider(
-                            label="Repetition Penalty",
-                            value=1.2,
-                            minimum=1.0,
-                            maximum=2.0,
-                            step=0.1,
-                            interactive=True,
-                            info="Penalize repetition — 1.0 to disable.",
-                        )
-    # with gr.Row():
-    #     gr.Markdown(
-    #         "demo 2",
-    #         elem_classes=["disclaimer"],
-    #     )
-
-    submit_event = msg.submit(
-        fn=user,
-        inputs=[msg, chatbot],
-        outputs=[msg, chatbot],
-        queue=False,
-    ).then(
-        fn=bot,
-        inputs=[
-            chatbot,
-            temperature,
-            top_p,
-            top_k,
-            repetition_penalty,
-            conversation_id,
-        ],
-        outputs=chatbot,
-        queue=True,
-    )
-    submit_click_event = submit.click(
-        fn=user,
-        inputs=[msg, chatbot],
-        outputs=[msg, chatbot],
-        queue=False,
-    ).then(
-        fn=bot,
-        inputs=[
-            chatbot,
-            temperature,
-            top_p,
-            top_k,
-            repetition_penalty,
-            conversation_id,
-        ],
-        outputs=chatbot,
-        queue=True,
-    )
-    stop.click(
-        fn=None,
-        inputs=None,
-        outputs=None,
-        cancels=[submit_event, submit_click_event],
-        queue=False,
-    )
-    clear.click(lambda: None, None, chatbot, queue=False)
-
-demo.queue(max_size=128, concurrency_count=2)
-demo.launch()
-
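For reference, the bloom-820m-chat app.py deleted above streams partial completions by running model.generate on a worker thread and consuming decoded text from a TextIteratorStreamer. A minimal standalone sketch of that pattern follows; the smaller checkpoint "bigscience/bloom-560m" and the demo prompt are placeholder choices for illustration, not values taken from the space itself.

# Minimal sketch of the worker-thread + TextIteratorStreamer pattern used in app.py above.
# The checkpoint and prompt are placeholders, not the space's exact configuration.
from threading import Thread

from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

tok = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m").eval()

inputs = tok("Hello, my name is", return_tensors="pt")
streamer = TextIteratorStreamer(tok, skip_prompt=True, skip_special_tokens=True)

# generate() blocks until decoding finishes, so it runs on a background thread;
# the streamer then yields decoded text fragments as they become available.
thread = Thread(target=model.generate, kwargs=dict(**inputs, max_new_tokens=64, streamer=streamer))
thread.start()
for piece in streamer:
    print(piece, end="", flush=True)
thread.join()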
diff --git a/spaces/Wanlau/sovits-4.0_datealive/vdecoder/hifigan/env.py b/spaces/Wanlau/sovits-4.0_datealive/vdecoder/hifigan/env.py
deleted file mode 100644
index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000
--- a/spaces/Wanlau/sovits-4.0_datealive/vdecoder/hifigan/env.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import os
-import shutil
-
-
-class AttrDict(dict):
-    def __init__(self, *args, **kwargs):
-        super(AttrDict, self).__init__(*args, **kwargs)
-        self.__dict__ = self
-
-
-def build_env(config, config_name, path):
-    t_path = os.path.join(path, config_name)
-    if config != t_path:
-        os.makedirs(path, exist_ok=True)
-        shutil.copyfile(config, os.path.join(path, config_name))
diff --git a/spaces/Warlord-K/TryOn/utils/scraper.py b/spaces/Warlord-K/TryOn/utils/scraper.py
deleted file mode 100644
index 26531e5a3f2aa92385290e6c3a58a569f5a65c8a..0000000000000000000000000000000000000000
--- a/spaces/Warlord-K/TryOn/utils/scraper.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import requests, json
-from bs4 import BeautifulSoup
-from selenium import webdriver
-from selenium.webdriver.chrome.options import Options
-
-
-def extract_link_flipkart(url):
-    r = requests.get(url)
-    soup = BeautifulSoup(r.content, "html5lib")
-    return soup.find_all("img", {"class": "_2r_T1I _396QI4"})[0]["src"]
-
-
-def extract_link_myntra(url):
-    headers = {
-        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36"
-    }
-
-    s = requests.Session()
-    res = s.get(url, headers=headers, verify=False)
-
-    soup = BeautifulSoup(res.text, "lxml")
-
-    script = None
-    for s in soup.find_all("script"):
-        if "pdpData" in s.text:
-            script = s.get_text(strip=True)
-            break
-    data = json.loads(script[script.index("{") :])
-    try:
-        link = data["pdpData"]["colours"][0]["image"]
-    except TypeError as e:
-        link = data["pdpData"]["media"]["albums"][0]["images"][0]["imageURL"]
-    return link
-
-
-def extract_link_amazon(
-    url, DRIVER_PATH=r"E:\Setups\chromedriver_win32\chromedriver.exe"
-):
-    options = Options()
-    options.headless = True
-    options.add_argument("--window-size=1920,1200")
-    try:
-        driver = webdriver.Chrome("chromedriver", options=options)
-    except Exception as e:
-        driver = webdriver.Chrome(options=options, executable_path=DRIVER_PATH)
-    driver.get(url)
-    soup = BeautifulSoup(driver.page_source, "html5lib")
-    return soup.findAll("img", {"class": "a-dynamic-image a-stretch-horizontal"})[0][
-        "src"
-    ]
-
-
-def extract_link(url):
-    if "flipkart" in url:
-        return extract_link_flipkart(url)
-    if "myntra" in url:
-        return extract_link_myntra(url)
-    if "amazon" in url and "media" not in url:
-        return extract_link_amazon(url)
-    return None
diff --git a/spaces/Wazzzabeee/image-video-colorization/models/deep_colorization/colorizers/base_color.py b/spaces/Wazzzabeee/image-video-colorization/models/deep_colorization/colorizers/base_color.py
deleted file mode 100644
index 00beb39e9f6f73b06ebea0314fc23a0bc75f23b7..0000000000000000000000000000000000000000
--- a/spaces/Wazzzabeee/image-video-colorization/models/deep_colorization/colorizers/base_color.py
+++ /dev/null
@@ -1,24 +0,0 @@
-
-import torch
-from torch import nn
-
-class BaseColor(nn.Module):
-    def __init__(self):
-        super(BaseColor, self).__init__()
-
-        self.l_cent = 50.
-        self.l_norm = 100.
-        self.ab_norm = 110.
-
-    def normalize_l(self, in_l):
-        return (in_l-self.l_cent)/self.l_norm
-
-    def unnormalize_l(self, in_l):
-        return in_l*self.l_norm + self.l_cent
-
-    def normalize_ab(self, in_ab):
-        return in_ab/self.ab_norm
-
-    def unnormalize_ab(self, in_ab):
-        return in_ab*self.ab_norm
-
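The BaseColor module deleted above simply affine-rescales Lab channels: lightness is centered at l_cent = 50 and divided by l_norm = 100, while the ab channels are divided by ab_norm = 110, so typical Lab values land roughly in [-1, 1]. A quick round-trip check of that arithmetic, assuming only that the BaseColor class from the file above is in scope:

import torch

# Sanity-check the (de)normalization defined in base_color.py above.
bc = BaseColor()  # assumes BaseColor from the deleted file has been imported
l = torch.tensor([0.0, 50.0, 100.0])     # Lab lightness nominally spans [0, 100]
ab = torch.tensor([-110.0, 0.0, 110.0])  # ab channels roughly span [-110, 110]
assert torch.allclose(bc.unnormalize_l(bc.normalize_l(l)), l)
assert torch.allclose(bc.unnormalize_ab(bc.normalize_ab(ab)), ab)
print(bc.normalize_l(l))  # tensor([-0.5000,  0.0000,  0.5000])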
diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/common_utils/__init__.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/common_utils/__init__.py
deleted file mode 100644
index 74ffcfef96fec35c99b2a1a053a61f44f7a8bbe9..0000000000000000000000000000000000000000
--- a/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/common_utils/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from .temp_utils import TempDirMixin
-from .wav_utils import get_batch_white_noise, get_white_noise, save_wav
diff --git a/spaces/Xenova/semantic-image-search-client/_next/static/chunks/b2db8554.501a8fbaf2ca19ba.js b/spaces/Xenova/semantic-image-search-client/_next/static/chunks/b2db8554.501a8fbaf2ca19ba.js
deleted file mode 100644
index 23247e60e0aeae7687578980ee84da1c07329d98..0000000000000000000000000000000000000000
--- a/spaces/Xenova/semantic-image-search-client/_next/static/chunks/b2db8554.501a8fbaf2ca19ba.js
+++ /dev/null
@@ -1,1679 +0,0 @@
-(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[15],{2018:function(module,__unused_webpack_exports,__webpack_require__){var process=__webpack_require__(2601);/*!
-* ONNX Runtime Web v1.14.0
-* Copyright (c) Microsoft Corporation. All rights reserved.
-* Licensed under the MIT License.
-*/!function(tr,tn){module.exports=tn(__webpack_require__(7731))}(self,__WEBPACK_EXTERNAL_MODULE__1670__=>(()=>{var __webpack_modules__={3474:(tr,tn,ti)=>{var to,ta=(to=(to="undefined"!=typeof document&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(tr){function tn(){return tF.buffer!=tL&&tX(tF.buffer),tR}function ta(){return tF.buffer!=tL&&tX(tF.buffer),tj}function ts(){return tF.buffer!=tL&&tX(tF.buffer),tM}function tu(){return tF.buffer!=tL&&tX(tF.buffer),tU}function tl(){return tF.buffer!=tL&&tX(tF.buffer),tV}tr=tr||{},tc||(tc=void 0!==tr?tr:{}),tc.ready=new Promise(function(tr,tn){tp=tr,tf=tn});var tc,tp,tf,td,th,tg,tb,tm,ty,t_=Object.assign({},tc),tv="./this.program",tx=(tr,tn)=>{throw tn},tw="object"==typeof window,tT="function"==typeof importScripts,tS="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node,tO=tc.ENVIRONMENT_IS_PTHREAD||!1,tA="";function tE(tr){return tc.locateFile?tc.locateFile(tr,tA):tA+tr}if(tS){let tr;tA=tT?ti(908).dirname(tA)+"/":"//",ty=()=>{tm||(tb=ti(1384),tm=ti(908))},td=function(tr,tn){return ty(),tr=tm.normalize(tr),tb.readFileSync(tr,tn?void 0:"utf8")},tg=tr=>((tr=td(tr,!0)).buffer||(tr=new Uint8Array(tr)),tr),th=(tr,tn,ti)=>{ty(),tr=tm.normalize(tr),tb.readFile(tr,function(tr,to){tr?ti(tr):tn(to.buffer)})},1{if(t1())throw process.exitCode=tr,tn;tn instanceof en||tk("exiting due to exception: "+tn),process.exit(tr)},tc.inspect=function(){return"[Emscripten Module object]"};try{tr=ti(9925)}catch(tr){throw console.error('The "worker_threads" module is not supported in this node.js build - perhaps a newer version is needed?'),tr}ti.g.Worker=tr.Worker}else(tw||tT)&&(tT?tA=self.location.href:"undefined"!=typeof document&&document.currentScript&&(tA=document.currentScript.src),to&&(tA=to),tA=0!==tA.indexOf("blob:")?tA.substr(0,tA.replace(/[?#].*/,"").lastIndexOf("/")+1):"",tS||(td=tr=>{var tn=new XMLHttpRequest;return tn.open("GET",tr,!1),tn.send(null),tn.responseText},tT&&(tg=tr=>{var tn=new XMLHttpRequest;return tn.open("GET",tr,!1),tn.responseType="arraybuffer",tn.send(null),new Uint8Array(tn.response)}),th=(tr,tn,ti)=>{var to=new XMLHttpRequest;to.open("GET",tr,!0),to.responseType="arraybuffer",to.onload=()=>{200==to.status||0==to.status&&to.response?tn(to.response):ti()},to.onerror=ti,to.send(null)}));tS&&"undefined"==typeof performance&&(ti.g.performance=ti(6953).performance);var
tI=console.log.bind(console),tP=console.warn.bind(console);tS&&(ty(),tI=tr=>tb.writeSync(1,tr+"\n"),tP=tr=>tb.writeSync(2,tr+"\n"));var tD,t$=tc.print||tI,tk=tc.printErr||tP;Object.assign(tc,t_),t_=null,tc.thisProgram&&(tv=tc.thisProgram),tc.quit&&(tx=tc.quit),tc.wasmBinary&&(tD=tc.wasmBinary);var tC=tc.noExitRuntime||!1;"object"!=typeof WebAssembly&&t5("no native wasm support detected");var tF,tN,tL,tR,tj,tM,tU,tV,tB=!1,tz="undefined"!=typeof TextDecoder?new TextDecoder("utf8"):void 0;function tG(tr,tn,ti){var to=(tn>>>=0)+ti;for(ti=tn;tr[ti]&&!(ti>=to);)++ti;if(16(ta=224==(240&ta)?(15&ta)<<12|ts<<6|tu:(7&ta)<<18|ts<<12|tu<<6|63&tr[tn++])?to+=String.fromCharCode(ta):(ta-=65536,to+=String.fromCharCode(55296|ta>>10,56320|1023&ta))}}else to+=String.fromCharCode(ta)}return to}function tH(tr,tn){return(tr>>>=0)?tG(ta(),tr,tn):""}function tW(tr,tn,ti,to){if(!(0>>=0;to=ti+to-1;for(var ts=0;ts=tu&&(tu=65536+((1023&tu)<<10)|1023&tr.charCodeAt(++ts)),127>=tu){if(ti>=to)break;tn[ti++>>>0]=tu}else{if(2047>=tu){if(ti+1>=to)break;tn[ti++>>>0]=192|tu>>6}else{if(65535>=tu){if(ti+2>=to)break;tn[ti++>>>0]=224|tu>>12}else{if(ti+3>=to)break;tn[ti++>>>0]=240|tu>>18,tn[ti++>>>0]=128|tu>>12&63}tn[ti++>>>0]=128|tu>>6&63}tn[ti++>>>0]=128|63&tu}}return tn[ti>>>0]=0,ti-ta}function tq(tr){for(var tn=0,ti=0;ti=to?tn++:2047>=to?tn+=2:55296<=to&&57343>=to?(tn+=4,++ti):tn+=3}return tn}function tX(tr){tL=tr,tc.HEAP8=tR=new Int8Array(tr),tc.HEAP16=new Int16Array(tr),tc.HEAP32=tM=new Int32Array(tr),tc.HEAPU8=tj=new Uint8Array(tr),tc.HEAPU16=new Uint16Array(tr),tc.HEAPU32=tU=new Uint32Array(tr),tc.HEAPF32=new Float32Array(tr),tc.HEAPF64=tV=new Float64Array(tr)}tO&&(tL=tc.buffer);var tY=tc.INITIAL_MEMORY||16777216;if(tO)tF=tc.wasmMemory,tL=tc.buffer;else if(tc.wasmMemory)tF=tc.wasmMemory;else if(!((tF=new WebAssembly.Memory({initial:tY/65536,maximum:65536,shared:!0})).buffer instanceof SharedArrayBuffer))throw tk("requested a shared WebAssembly.Memory but the returned buffer is not a SharedArrayBuffer, indicating that while the browser has SharedArrayBuffer it does not have WebAssembly threads support - you may need to set a flag"),tS&&console.log("(on node you may need: --experimental-wasm-threads --experimental-wasm-bulk-memory and also use a recent version)"),Error("bad memory");tF&&(tL=tF.buffer),tY=tL.byteLength,tX(tL);var tK,tZ=[],tJ=[],tQ=[],t0=[];function t1(){return tC||!1}function t2(){var tr=tc.preRun.shift();tZ.unshift(tr)}var t3,t4=0,t6=null,t8=null;function t5(tr){throw tO?postMessage({cmd:"onAbort",arg:tr}):tc.onAbort&&tc.onAbort(tr),tk(tr="Aborted("+tr+")"),tB=!0,tf(tr=new WebAssembly.RuntimeError(tr+". 
Build with -sASSERTIONS for more info.")),tr}function t7(){return t3.startsWith("data:application/octet-stream;base64,")}function t9(){var tr=t3;try{if(tr==t3&&tD)return new Uint8Array(tD);if(tg)return tg(tr);throw"both async and sync fetching of the wasm failed"}catch(tr){t5(tr)}}t3="ort-wasm-threaded.wasm",t7()||(t3=tE(t3));var er={};function en(tr){this.name="ExitStatus",this.message="Program terminated with exit("+tr+")",this.status=tr}function ei(tr){(tr=eu.Vb[tr])||t5(),eu.mc(tr)}function eo(tr){var tn=eu.Cc();if(!tn)return 6;eu.ac.push(tn),eu.Vb[tr.Ub]=tn,tn.Ub=tr.Ub;var ti={cmd:"run",start_routine:tr.Ic,arg:tr.zc,pthread_ptr:tr.Ub};return tn.$b=()=>{ti.time=performance.now(),tn.postMessage(ti,tr.Nc)},tn.loaded&&(tn.$b(),delete tn.$b),0}function ea(tr){if(tO)return eB(1,1,tr);t1()||(eu.oc(),tc.onExit&&tc.onExit(tr),tB=!0),tx(tr,new en(tr))}function es(tr,tn){if(!tn&&tO)throw ep(tr),"unwind";t1()||tO||(ri(),el(tQ),rn(0),eJ[1].length&&eQ(1,10),eJ[2].length&&eQ(2,10),eu.oc()),ea(tr)}var eu={Yb:[],ac:[],qc:[],Vb:{},fc:function(){tO&&eu.Ec()},Pc:function(){},Ec:function(){eu.receiveObjectTransfer=eu.Gc,eu.threadInitTLS=eu.pc,eu.setExitStatus=eu.nc,tC=!1},nc:function(){},oc:function(){for(var tr of Object.values(eu.Vb))eu.mc(tr);for(tr of eu.Yb)tr.terminate();eu.Yb=[]},mc:function(tr){var tn=tr.Ub;delete eu.Vb[tn],eu.Yb.push(tr),eu.ac.splice(eu.ac.indexOf(tr),1),tr.Ub=0,rl(tn)},Gc:function(){},pc:function(){eu.qc.forEach(tr=>tr())},Fc:function(tr,tn){tr.onmessage=ti=>{var to=(ti=ti.data).cmd;if(tr.Ub&&(eu.Bc=tr.Ub),ti.targetThread&&ti.targetThread!=e7()){var ta=eu.Vb[ti.Qc];ta?ta.postMessage(ti,ti.transferList):tk('Internal error! Worker sent a message "'+to+'" to target pthread '+ti.targetThread+", but that thread no longer exists!")}else"processProxyingQueue"===to?eL(ti.queue):"spawnThread"===to?eo(ti):"cleanupThread"===to?ei(ti.thread):"killThread"===to?(ti=ti.thread,to=eu.Vb[ti],delete eu.Vb[ti],to.terminate(),rl(ti),eu.ac.splice(eu.ac.indexOf(to),1),to.Ub=0):"cancelThread"===to?eu.Vb[ti.thread].postMessage({cmd:"cancel"}):"loaded"===to?(tr.loaded=!0,tn&&tn(tr),tr.$b&&(tr.$b(),delete tr.$b)):"print"===to?t$("Thread "+ti.threadId+": "+ti.text):"printErr"===to?tk("Thread "+ti.threadId+": "+ti.text):"alert"===to?alert("Thread "+ti.threadId+": "+ti.text):"setimmediate"===ti.target?tr.postMessage(ti):"onAbort"===to?tc.onAbort&&tc.onAbort(ti.arg):to&&tk("worker sent an unknown command "+to);eu.Bc=void 0},tr.onerror=tr=>{throw tk("worker sent an error! 
"+tr.filename+":"+tr.lineno+": "+tr.message),tr},tS&&(tr.on("message",function(tn){tr.onmessage({data:tn})}),tr.on("error",function(tn){tr.onerror(tn)}),tr.on("detachedExit",function(){})),tr.postMessage({cmd:"load",urlOrBlob:tc.mainScriptUrlOrBlob||to,wasmMemory:tF,wasmModule:tN})},yc:function(){var tr=tE("ort-wasm-threaded.worker.js");eu.Yb.push(new Worker(tr))},Cc:function(){return 0==eu.Yb.length&&(eu.yc(),eu.Fc(eu.Yb[0])),eu.Yb.pop()}};function el(tr){for(;0>2>>>0];rf(tn,tn-(tr=ts()[tr+48>>2>>>0])),rh(tn)};var ef=[];function ed(tr){var tn=ef[tr];return tn||(tr>=ef.length&&(ef.length=tr+1),ef[tr]=tn=tK.get(tr)),tn}tc.invokeEntryPoint=function(tr,tn){tr=ed(tr)(tn),t1()?eu.nc(tr):rc(tr)};var eh,eg,eb=[],em=0,ey=0;function e_(tr){this.Zb=tr,this.Sb=tr-24,this.xc=function(tr){tu()[this.Sb+4>>2>>>0]=tr},this.bc=function(){return tu()[this.Sb+4>>2>>>0]},this.wc=function(tr){tu()[this.Sb+8>>2>>>0]=tr},this.Dc=function(){return tu()[this.Sb+8>>2>>>0]},this.rc=function(){ts()[this.Sb>>2>>>0]=0},this.hc=function(tr){tr=tr?1:0,tn()[this.Sb+12>>0>>>0]=tr},this.uc=function(){return 0!=tn()[this.Sb+12>>0>>>0]},this.ic=function(tr){tr=tr?1:0,tn()[this.Sb+13>>0>>>0]=tr},this.kc=function(){return 0!=tn()[this.Sb+13>>0>>>0]},this.fc=function(tr,tn){this.cc(0),this.xc(tr),this.wc(tn),this.rc(),this.hc(!1),this.ic(!1)},this.sc=function(){Atomics.add(ts(),this.Sb>>2,1)},this.Hc=function(){return 1===Atomics.sub(ts(),this.Sb>>2,1)},this.cc=function(tr){tu()[this.Sb+16>>2>>>0]=tr},this.tc=function(){return tu()[this.Sb+16>>2>>>0]},this.vc=function(){if(rm(this.bc()))return tu()[this.Zb>>2>>>0];var tr=this.tc();return 0!==tr?tr:this.Zb}}function ev(tr){return rr(new e_(tr).Sb)}function ex(tr,tn,ti,to){return tO?eB(3,1,tr,tn,ti,to):ew(tr,tn,ti,to)}function ew(tr,tn,ti,to){if("undefined"==typeof SharedArrayBuffer)return tk("Current environment does not support SharedArrayBuffer, pthreads are not available!"),6;var ta=[];return tO&&0===ta.length?ex(tr,tn,ti,to):(tr={Ic:ti,Ub:tr,zc:to,Nc:ta},tO?(tr.Oc="spawnThread",postMessage(tr,ta),0):eo(tr))}function eT(tr,tn,ti){return tO?eB(4,1,tr,tn,ti):0}function eS(tr,tn){if(tO)return eB(5,1,tr,tn)}function eO(tr,tn){if(tO)return eB(6,1,tr,tn)}function eA(tr,tn,ti){if(tO)return eB(7,1,tr,tn,ti)}function eE(tr,tn,ti){return tO?eB(8,1,tr,tn,ti):0}function eI(tr,tn){if(tO)return eB(9,1,tr,tn)}function eP(tr,tn,ti){if(tO)return eB(10,1,tr,tn,ti)}function eD(tr,tn,ti,to){if(tO)return eB(11,1,tr,tn,ti,to)}function e$(tr,tn,ti,to){if(tO)return eB(12,1,tr,tn,ti,to)}function ek(tr,tn,ti,to){if(tO)return eB(13,1,tr,tn,ti,to)}function eC(tr){if(tO)return eB(14,1,tr)}function eF(tr,tn){if(tO)return eB(15,1,tr,tn)}function eN(tr,tn,ti){if(tO)return eB(16,1,tr,tn,ti)}function eL(tr){Atomics.store(ts(),tr>>2,1),e7()&&ru(tr),Atomics.compareExchange(ts(),tr>>2,1,0)}function eR(tr){return tu()[tr>>>2]+4294967296*ts()[tr+4>>>2]}function ej(tr,tn,ti,to,ta,ts){return tO?eB(17,1,tr,tn,ti,to,ta,ts):-52}function eM(tr,tn,ti,to,ta,ts){if(tO)return eB(18,1,tr,tn,ti,to,ta,ts)}function eU(tr){var ti=tq(tr)+1,to=e9(ti);return to&&tW(tr,tn(),to,ti),to}function eV(tr,tn,ti){function to(tr){return(tr=tr.toTimeString().match(/\(([A-Za-z ]+)\)$/))?tr[1]:"GMT"}if(tO)return eB(19,1,tr,tn,ti);var ta=(new Date).getFullYear(),tl=new Date(ta,0,1),tc=new Date(ta,6,1);ta=tl.getTimezoneOffset();var 
tp=tc.getTimezoneOffset(),tf=Math.max(ta,tp);ts()[tr>>2>>>0]=60*tf,ts()[tn>>2>>>0]=Number(ta!=tp),tr=to(tl),tn=to(tc),tr=eU(tr),tn=eU(tn),tp>2>>>0]=tr,tu()[ti+4>>2>>>0]=tn):(tu()[ti>>2>>>0]=tn,tu()[ti+4>>2>>>0]=tr)}function eB(tr,tn){var ti=arguments.length-2,to=arguments;return ec(()=>{for(var ta=rg(8*ti),ts=ta>>3,tu=0;tu>>0]=tc}return rs(tr,ti,ta,tn)})}tc.executeNotifiedProxyingQueue=eL,eg=tS?()=>{var tr=process.hrtime();return 1e3*tr[0]+tr[1]/1e6}:tO?()=>performance.now()-tc.__performance_now_clock_drift:()=>performance.now();var ez,eG=[],eH={};function eW(){if(!ez){var tr,tn={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:("object"==typeof navigator&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:tv||"./this.program"};for(tr in eH)void 0===eH[tr]?delete tn[tr]:tn[tr]=eH[tr];var ti=[];for(tr in tn)ti.push(tr+"="+tn[tr]);ez=ti}return ez}function eq(tr,ti){if(tO)return eB(20,1,tr,ti);var to=0;return eW().forEach(function(ta,ts){var tl=ti+to;for(ts=tu()[tr+4*ts>>2>>>0]=tl,tl=0;tl>0>>>0]=ta.charCodeAt(tl);tn()[ts>>0>>>0]=0,to+=ta.length+1}),0}function eX(tr,tn){if(tO)return eB(21,1,tr,tn);var ti=eW();tu()[tr>>2>>>0]=ti.length;var to=0;return ti.forEach(function(tr){to+=tr.length+1}),tu()[tn>>2>>>0]=to,0}function eY(tr){return tO?eB(22,1,tr):52}function eK(tr,tn,ti,to){return tO?eB(23,1,tr,tn,ti,to):52}function eZ(tr,tn,ti,to,ta){return tO?eB(24,1,tr,tn,ti,to,ta):70}var eJ=[null,[],[]];function eQ(tr,tn){var ti=eJ[tr];0===tn||10===tn?((1===tr?t$:tk)(tG(ti,0)),ti.length=0):ti.push(tn)}function e0(tr,tn,ti,to){if(tO)return eB(25,1,tr,tn,ti,to);for(var ts=0,tl=0;tl>2>>>0],tp=tu()[tn+4>>2>>>0];tn+=8;for(var tf=0;tf>>0]);ts+=tp}return tu()[to>>2>>>0]=ts,0}var e1=0;function e2(tr){return 0==tr%4&&(0!=tr%100||0==tr%400)}var e3=[31,29,31,30,31,30,31,31,30,31,30,31],e4=[31,28,31,30,31,30,31,31,30,31,30,31];function e6(tr,ti,to,ta){function tu(tr,tn,ti){for(tr="number"==typeof tr?tr.toString():tr||"";tr.lengthtr?-1:0to-tr.getDate())){tr.setDate(tr.getDate()+tn);break}tn-=to-tr.getDate()+1,tr.setDate(1),11>ti?tr.setMonth(ti+1):(tr.setMonth(0),tr.setFullYear(tr.getFullYear()+1))}return ti=new Date(tr.getFullYear()+1,0,4),tn=tp(new Date(tr.getFullYear(),0,4)),ti=tp(ti),0>=tc(tn,tr)?0>=tc(ti,tr)?tr.getFullYear()+1:tr.getFullYear():tr.getFullYear()-1}var td=ts()[ta+40>>2>>>0];for(var th in ta={Lc:ts()[ta>>2>>>0],Kc:ts()[ta+4>>2>>>0],dc:ts()[ta+8>>2>>>0],jc:ts()[ta+12>>2>>>0],ec:ts()[ta+16>>2>>>0],Xb:ts()[ta+20>>2>>>0],Tb:ts()[ta+24>>2>>>0],Wb:ts()[ta+28>>2>>>0],Rc:ts()[ta+32>>2>>>0],Jc:ts()[ta+36>>2>>>0],Mc:td?tH(td):""},to=tH(to),td={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})to=to.replace(RegExp(th,"g"),td[th]);var tg="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),tb="January February March April May June July August September October November December".split(" ");for(th in td={"%a":function(tr){return tg[tr.Tb].substring(0,3)},"%A":function(tr){return tg[tr.Tb]},"%b":function(tr){return tb[tr.ec].substring(0,3)},"%B":function(tr){return tb[tr.ec]},"%C":function(tr){return tl((tr.Xb+1900)/100|0,2)},"%d":function(tr){return tl(tr.jc,2)},"%e":function(tr){return tu(tr.jc,2," 
")},"%g":function(tr){return tf(tr).toString().substring(2)},"%G":function(tr){return tf(tr)},"%H":function(tr){return tl(tr.dc,2)},"%I":function(tr){return 0==(tr=tr.dc)?tr=12:12tr.dc?"AM":"PM"},"%S":function(tr){return tl(tr.Lc,2)},"%t":function(){return" "},"%u":function(tr){return tr.Tb||7},"%U":function(tr){return tl(Math.floor((tr.Wb+7-tr.Tb)/7),2)},"%V":function(tr){var tn=Math.floor((tr.Wb+7-(tr.Tb+6)%7)/7);if(2>=(tr.Tb+371-tr.Wb-2)%7&&tn++,tn)53==tn&&(4==(ti=(tr.Tb+371-tr.Wb)%7)||3==ti&&e2(tr.Xb)||(tn=1));else{tn=52;var ti=(tr.Tb+7-tr.Wb-1)%7;(4==ti||5==ti&&e2(tr.Xb%400-1))&&tn++}return tl(tn,2)},"%w":function(tr){return tr.Tb},"%W":function(tr){return tl(Math.floor((tr.Wb+7-(tr.Tb+6)%7)/7),2)},"%y":function(tr){return(tr.Xb+1900).toString().substring(2)},"%Y":function(tr){return tr.Xb+1900},"%z":function(tr){var tn=0<=(tr=tr.Jc);return(tn?"+":"-")+String("0000"+((tr=Math.abs(tr)/60)/60*100+tr%60)).slice(-4)},"%Z":function(tr){return tr.Mc},"%%":function(){return"%"}},to=to.replace(/%%/g,"\x00\x00"),td)to.includes(th)&&(to=to.replace(RegExp(th,"g"),td[th](ta)));return(th=function(tr){var tn=Array(tq(tr)+1);return tW(tr,tn,0,tn.length),tn}(to=to.replace(/\0\0/g,"%"))).length>ti?0:(function(tr,ti){tn().set(tr,ti>>>0)}(th,tr),th.length-1)}eu.fc();var e8=[null,ea,ep,ex,eT,eS,eO,eA,eE,eI,eP,eD,e$,ek,eC,eF,eN,ej,eM,eV,eq,eX,eY,eK,eZ,e0],e5={b:function(tr){return e9(tr+24)+24},n:function(tr){return(tr=new e_(tr)).uc()||(tr.hc(!0),em--),tr.ic(!1),eb.push(tr),tr.sc(),tr.vc()},ma:function(tr){throw tk("Unexpected exception thrown, this is not properly supported - aborting"),tB=!0,tr},x:function(){rp(0);var tr=eb.pop();if(tr.Hc()&&!tr.kc()){var tn=tr.Dc();tn&&ed(tn)(tr.Zb),ev(tr.Zb)}ey=0},e:function(){var tr=ey;if(!tr)return e1=0;var tn=new e_(tr);tn.cc(tr);var ti=tn.bc();if(!ti)return e1=0,tr;for(var to=Array.prototype.slice.call(arguments),ta=0;taeL(to));else if(tO)postMessage({targetThread:tr,cmd:"processProxyingQueue",queue:to});else{if(!(tr=eu.Vb[tr]))return;tr.postMessage({cmd:"processProxyingQueue",queue:to})}return 1},Ea:function(){return -1},Pa:function(tr,tn){tr=new Date(1e3*eR(tr)),ts()[tn>>2>>>0]=tr.getUTCSeconds(),ts()[tn+4>>2>>>0]=tr.getUTCMinutes(),ts()[tn+8>>2>>>0]=tr.getUTCHours(),ts()[tn+12>>2>>>0]=tr.getUTCDate(),ts()[tn+16>>2>>>0]=tr.getUTCMonth(),ts()[tn+20>>2>>>0]=tr.getUTCFullYear()-1900,ts()[tn+24>>2>>>0]=tr.getUTCDay(),tr=(tr.getTime()-Date.UTC(tr.getUTCFullYear(),0,1,0,0,0,0))/864e5|0,ts()[tn+28>>2>>>0]=tr},Qa:function(tr,tn){tr=new Date(1e3*eR(tr)),ts()[tn>>2>>>0]=tr.getSeconds(),ts()[tn+4>>2>>>0]=tr.getMinutes(),ts()[tn+8>>2>>>0]=tr.getHours(),ts()[tn+12>>2>>>0]=tr.getDate(),ts()[tn+16>>2>>>0]=tr.getMonth(),ts()[tn+20>>2>>>0]=tr.getFullYear()-1900,ts()[tn+24>>2>>>0]=tr.getDay();var ti=new Date(tr.getFullYear(),0,1),to=(tr.getTime()-ti.getTime())/864e5|0;ts()[tn+28>>2>>>0]=to,ts()[tn+36>>2>>>0]=-60*tr.getTimezoneOffset(),tr=0|((to=new Date(tr.getFullYear(),6,1).getTimezoneOffset())!=(ti=ti.getTimezoneOffset())&&tr.getTimezoneOffset()==Math.min(ti,to)),ts()[tn+32>>2>>>0]=tr},Ra:function(tr){var tn=new Date(ts()[tr+20>>2>>>0]+1900,ts()[tr+16>>2>>>0],ts()[tr+12>>2>>>0],ts()[tr+8>>2>>>0],ts()[tr+4>>2>>>0],ts()[tr>>2>>>0],0),ti=ts()[tr+32>>2>>>0],to=tn.getTimezoneOffset(),ta=new Date(tn.getFullYear(),0,1),tu=new Date(tn.getFullYear(),6,1).getTimezoneOffset(),tl=ta.getTimezoneOffset(),tc=Math.min(tl,tu);return 
0>ti?ts()[tr+32>>2>>>0]=Number(tu!=tl&&tc==to):0>2>>>0]=tn.getDay(),ti=(tn.getTime()-ta.getTime())/864e5|0,ts()[tr+28>>2>>>0]=ti,ts()[tr>>2>>>0]=tn.getSeconds(),ts()[tr+4>>2>>>0]=tn.getMinutes(),ts()[tr+8>>2>>>0]=tn.getHours(),ts()[tr+12>>2>>>0]=tn.getDate(),ts()[tr+16>>2>>>0]=tn.getMonth(),tn.getTime()/1e3|0},Aa:ej,Ba:eM,Sa:function tr(tn,ti,to){tr.Ac||(tr.Ac=!0,eV(tn,ti,to))},y:function(){t5("")},U:function(){if(!tS&&!tT){var tr="Blocking on the main thread is very dangerous, see https://emscripten.org/docs/porting/pthreads.html#blocking-on-the-main-browser-thread";eh||(eh={}),eh[tr]||(eh[tr]=1,tS&&(tr="warning: "+tr),tk(tr))}},ra:function(){return 4294901760},B:eg,Ia:function(tr,tn,ti){ta().copyWithin(tr>>>0,tn>>>0,tn+ti>>>0)},F:function(){return tS?ti(3993).cpus().length:navigator.hardwareConcurrency},Da:function(tr,tn,ti){eG.length=tn,ti>>=3;for(var to=0;to>>0];return(0>tr?er[-tr-1]:e8[tr]).apply(null,eG)},qa:function(tr){var tn=ta().length;if((tr>>>=0)<=tn||4294901760=ti;ti*=2){var to=tn*(1+.2/ti);to=Math.min(to,tr+100663296);var ts=Math;to=Math.max(tr,to),ts=ts.min.call(ts,4294901760,to+(65536-to%65536)%65536);t:{try{tF.grow(ts-tL.byteLength+65535>>>16),tX(tF.buffer);var tu=1;break t}catch(tr){}tu=void 0}if(tu)return!0}return!1},Na:function(){throw"unwind"},Ga:eq,Ha:eX,J:es,I:eY,S:eK,ga:eZ,R:e0,d:function(){return e1},na:function tr(to,ta){tr.lc||(tr.lc=function(){if("object"==typeof crypto&&"function"==typeof crypto.getRandomValues){var tr=new Uint8Array(1);return()=>(crypto.getRandomValues(tr),tr[0])}if(tS)try{var tn=ti(Object(function(){var tr=Error("Cannot find module 'crypto'");throw tr.code="MODULE_NOT_FOUND",tr}()));return()=>tn.randomBytes(1)[0]}catch(tr){}return()=>t5("randomDevice")}());for(var ts=0;ts>0>>>0]=tr.lc();return 0},ia:function(tr,tn,ti){var to=rd();try{return ed(tr)(tn,ti)}catch(tr){if(rh(to),tr!==tr+0)throw tr;rp(1,0)}},ja:function(tr,tn,ti){var to=rd();try{return ed(tr)(tn,ti)}catch(tr){if(rh(to),tr!==tr+0)throw tr;rp(1,0)}},K:function(tr){var tn=rd();try{return ed(tr)()}catch(tr){if(rh(tn),tr!==tr+0)throw tr;rp(1,0)}},f:function(tr,tn){var ti=rd();try{return ed(tr)(tn)}catch(tr){if(rh(ti),tr!==tr+0)throw tr;rp(1,0)}},P:function(tr,tn,ti){var to=rd();try{return ed(tr)(tn,ti)}catch(tr){if(rh(to),tr!==tr+0)throw tr;rp(1,0)}},Q:function(tr,tn,ti){var to=rd();try{return ed(tr)(tn,ti)}catch(tr){if(rh(to),tr!==tr+0)throw tr;rp(1,0)}},k:function(tr,tn,ti){var to=rd();try{return ed(tr)(tn,ti)}catch(tr){if(rh(to),tr!==tr+0)throw tr;rp(1,0)}},p:function(tr,tn,ti,to){var ta=rd();try{return ed(tr)(tn,ti,to)}catch(tr){if(rh(ta),tr!==tr+0)throw tr;rp(1,0)}},q:function(tr,tn,ti,to,ta){var ts=rd();try{return ed(tr)(tn,ti,to,ta)}catch(tr){if(rh(ts),tr!==tr+0)throw tr;rp(1,0)}},N:function(tr,tn,ti,to,ta,ts){var tu=rd();try{return ed(tr)(tn,ti,to,ta,ts)}catch(tr){if(rh(tu),tr!==tr+0)throw tr;rp(1,0)}},s:function(tr,tn,ti,to,ta,ts){var tu=rd();try{return ed(tr)(tn,ti,to,ta,ts)}catch(tr){if(rh(tu),tr!==tr+0)throw tr;rp(1,0)}},w:function(tr,tn,ti,to,ta,ts,tu){var tl=rd();try{return ed(tr)(tn,ti,to,ta,ts,tu)}catch(tr){if(rh(tl),tr!==tr+0)throw tr;rp(1,0)}},L:function(tr,tn,ti,to,ta,ts,tu,tl){var tc=rd();try{return ed(tr)(tn,ti,to,ta,ts,tu,tl)}catch(tr){if(rh(tc),tr!==tr+0)throw tr;rp(1,0)}},E:function(tr,tn,ti,to,ta,ts,tu,tl,tc,tp,tf,td){var th=rd();try{return ed(tr)(tn,ti,to,ta,ts,tu,tl,tc,tp,tf,td)}catch(tr){if(rh(th),tr!==tr+0)throw tr;rp(1,0)}},aa:function(tr,tn,ti,to,ta,ts,tu,tl){var tc=rd();try{return rA(tr,tn,ti,to,ta,ts,tu,tl)}catch(tr){if(rh(tc),tr!==tr+0)throw 
tr;rp(1,0)}},_:function(tr,tn,ti,to,ta,ts,tu){var tl=rd();try{return r_(tr,tn,ti,to,ta,ts,tu)}catch(tr){if(rh(tl),tr!==tr+0)throw tr;rp(1,0)}},Z:function(tr,tn,ti,to,ta){var ts=rd();try{return rE(tr,tn,ti,to,ta)}catch(tr){if(rh(ts),tr!==tr+0)throw tr;rp(1,0)}},ca:function(tr,tn,ti,to){var ta=rd();try{return rS(tr,tn,ti,to)}catch(tr){if(rh(ta),tr!==tr+0)throw tr;rp(1,0)}},$:function(tr){var tn=rd();try{return ry(tr)}catch(tr){if(rh(tn),tr!==tr+0)throw tr;rp(1,0)}},ba:function(tr,tn){var ti=rd();try{return rO(tr,tn)}catch(tr){if(rh(ti),tr!==tr+0)throw tr;rp(1,0)}},Y:function(tr,tn,ti){var to=rd();try{return rv(tr,tn,ti)}catch(tr){if(rh(to),tr!==tr+0)throw tr;rp(1,0)}},g:function(tr){var tn=rd();try{ed(tr)()}catch(tr){if(rh(tn),tr!==tr+0)throw tr;rp(1,0)}},r:function(tr,tn){var ti=rd();try{ed(tr)(tn)}catch(tr){if(rh(ti),tr!==tr+0)throw tr;rp(1,0)}},i:function(tr,tn,ti){var to=rd();try{ed(tr)(tn,ti)}catch(tr){if(rh(to),tr!==tr+0)throw tr;rp(1,0)}},ha:function(tr,tn,ti,to){var ta=rd();try{ed(tr)(tn,ti,to)}catch(tr){if(rh(ta),tr!==tr+0)throw tr;rp(1,0)}},m:function(tr,tn,ti,to){var ta=rd();try{ed(tr)(tn,ti,to)}catch(tr){if(rh(ta),tr!==tr+0)throw tr;rp(1,0)}},v:function(tr,tn,ti,to,ta){var ts=rd();try{ed(tr)(tn,ti,to,ta)}catch(tr){if(rh(ts),tr!==tr+0)throw tr;rp(1,0)}},u:function(tr,tn,ti,to,ta,ts){var tu=rd();try{ed(tr)(tn,ti,to,ta,ts)}catch(tr){if(rh(tu),tr!==tr+0)throw tr;rp(1,0)}},O:function(tr,tn,ti,to,ta,ts,tu){var tl=rd();try{ed(tr)(tn,ti,to,ta,ts,tu)}catch(tr){if(rh(tl),tr!==tr+0)throw tr;rp(1,0)}},A:function(tr,tn,ti,to,ta,ts,tu,tl){var tc=rd();try{ed(tr)(tn,ti,to,ta,ts,tu,tl)}catch(tr){if(rh(tc),tr!==tr+0)throw tr;rp(1,0)}},ka:function(tr,tn,ti,to,ta,ts,tu,tl,tc){var tp=rd();try{ed(tr)(tn,ti,to,ta,ts,tu,tl,tc)}catch(tr){if(rh(tp),tr!==tr+0)throw tr;rp(1,0)}},C:function(tr,tn,ti,to,ta,ts,tu,tl,tc,tp,tf){var td=rd();try{ed(tr)(tn,ti,to,ta,ts,tu,tl,tc,tp,tf)}catch(tr){if(rh(td),tr!==tr+0)throw tr;rp(1,0)}},D:function(tr,tn,ti,to,ta,ts,tu,tl,tc,tp,tf,td,th,tg,tb,tm){var ty=rd();try{ed(tr)(tn,ti,to,ta,ts,tu,tl,tc,tp,tf,td,th,tg,tb,tm)}catch(tr){if(rh(ty),tr!==tr+0)throw tr;rp(1,0)}},fa:function(tr,tn,ti,to,ta,ts,tu,tl){var tc=rd();try{rx(tr,tn,ti,to,ta,ts,tu,tl)}catch(tr){if(rh(tc),tr!==tr+0)throw tr;rp(1,0)}},da:function(tr,tn,ti,to,ta,ts,tu,tl,tc,tp,tf,td){var th=rd();try{rT(tr,tn,ti,to,ta,ts,tu,tl,tc,tp,tf,td)}catch(tr){if(rh(th),tr!==tr+0)throw tr;rp(1,0)}},ea:function(tr,tn,ti,to,ta,ts){var tu=rd();try{rw(tr,tn,ti,to,ta,ts)}catch(tr){if(rh(tu),tr!==tr+0)throw tr;rp(1,0)}},o:function(tr){return tr},a:tF||tc.wasmMemory,G:function(tr){e1=tr},la:e6,z:function(tr,tn,ti,to){return e6(tr,tn,ti,to)}};(function(){function tr(tr,tn){tc.asm=tr.exports,eu.qc.push(tc.asm.sb),tK=tc.asm.ub,tJ.unshift(tc.asm.Va),tN=tn,tO||(t4--,tc.monitorRunDependencies&&tc.monitorRunDependencies(t4),0==t4&&(null!==t6&&(clearInterval(t6),t6=null),t8&&(tr=t8,t8=null,tr())))}function tn(tn){tr(tn.instance,tn.module)}function ti(tr){return(function(){if(!tD&&(tw||tT)){if("function"==typeof fetch&&!t3.startsWith("file://"))return fetch(t3,{credentials:"same-origin"}).then(function(tr){if(!tr.ok)throw"failed to load wasm binary file at '"+t3+"'";return tr.arrayBuffer()}).catch(function(){return t9()});if(th)return new Promise(function(tr,tn){th(t3,function(tn){tr(new Uint8Array(tn))},tn)})}return Promise.resolve().then(function(){return t9()})})().then(function(tr){return WebAssembly.instantiate(tr,to)}).then(function(tr){return tr}).then(tr,function(tr){tk("failed to asynchronously prepare wasm: "+tr),t5(tr)})}var 
to={a:e5};if(tO||(t4++,tc.monitorRunDependencies&&tc.monitorRunDependencies(t4)),tc.instantiateWasm)try{return tc.instantiateWasm(to,tr)}catch(tr){return tk("Module.instantiateWasm callback failed with error: "+tr),!1}(tD||"function"!=typeof WebAssembly.instantiateStreaming||t7()||t3.startsWith("file://")||tS||"function"!=typeof fetch?ti(tn):fetch(t3,{credentials:"same-origin"}).then(function(tr){return WebAssembly.instantiateStreaming(tr,to).then(tn,function(tr){return tk("wasm streaming compile failed: "+tr),tk("falling back to ArrayBuffer instantiation"),ti(tn)})})).catch(tf)})(),tc.___wasm_call_ctors=function(){return(tc.___wasm_call_ctors=tc.asm.Va).apply(null,arguments)},tc._OrtInit=function(){return(tc._OrtInit=tc.asm.Wa).apply(null,arguments)},tc._OrtCreateSessionOptions=function(){return(tc._OrtCreateSessionOptions=tc.asm.Xa).apply(null,arguments)},tc._OrtAppendExecutionProvider=function(){return(tc._OrtAppendExecutionProvider=tc.asm.Ya).apply(null,arguments)},tc._OrtAddSessionConfigEntry=function(){return(tc._OrtAddSessionConfigEntry=tc.asm.Za).apply(null,arguments)},tc._OrtReleaseSessionOptions=function(){return(tc._OrtReleaseSessionOptions=tc.asm._a).apply(null,arguments)},tc._OrtCreateSession=function(){return(tc._OrtCreateSession=tc.asm.$a).apply(null,arguments)},tc._OrtReleaseSession=function(){return(tc._OrtReleaseSession=tc.asm.ab).apply(null,arguments)},tc._OrtGetInputCount=function(){return(tc._OrtGetInputCount=tc.asm.bb).apply(null,arguments)},tc._OrtGetOutputCount=function(){return(tc._OrtGetOutputCount=tc.asm.cb).apply(null,arguments)},tc._OrtGetInputName=function(){return(tc._OrtGetInputName=tc.asm.db).apply(null,arguments)},tc._OrtGetOutputName=function(){return(tc._OrtGetOutputName=tc.asm.eb).apply(null,arguments)},tc._OrtFree=function(){return(tc._OrtFree=tc.asm.fb).apply(null,arguments)},tc._OrtCreateTensor=function(){return(tc._OrtCreateTensor=tc.asm.gb).apply(null,arguments)},tc._OrtGetTensorData=function(){return(tc._OrtGetTensorData=tc.asm.hb).apply(null,arguments)},tc._OrtReleaseTensor=function(){return(tc._OrtReleaseTensor=tc.asm.ib).apply(null,arguments)},tc._OrtCreateRunOptions=function(){return(tc._OrtCreateRunOptions=tc.asm.jb).apply(null,arguments)},tc._OrtAddRunConfigEntry=function(){return(tc._OrtAddRunConfigEntry=tc.asm.kb).apply(null,arguments)},tc._OrtReleaseRunOptions=function(){return(tc._OrtReleaseRunOptions=tc.asm.lb).apply(null,arguments)},tc._OrtRun=function(){return(tc._OrtRun=tc.asm.mb).apply(null,arguments)},tc._OrtEndProfiling=function(){return(tc._OrtEndProfiling=tc.asm.nb).apply(null,arguments)};var e7=tc._pthread_self=function(){return(e7=tc._pthread_self=tc.asm.ob).apply(null,arguments)},e9=tc._malloc=function(){return(e9=tc._malloc=tc.asm.pb).apply(null,arguments)},rr=tc._free=function(){return(rr=tc._free=tc.asm.qb).apply(null,arguments)},rn=tc._fflush=function(){return(rn=tc._fflush=tc.asm.rb).apply(null,arguments)};tc.__emscripten_tls_init=function(){return(tc.__emscripten_tls_init=tc.asm.sb).apply(null,arguments)};var ri=tc.___funcs_on_exit=function(){return(ri=tc.___funcs_on_exit=tc.asm.tb).apply(null,arguments)},ro=tc.__emscripten_thread_init=function(){return(ro=tc.__emscripten_thread_init=tc.asm.vb).apply(null,arguments)};tc.__emscripten_thread_crashed=function(){return(tc.__emscripten_thread_crashed=tc.asm.wb).apply(null,arguments)};var 
ra,rs=tc._emscripten_run_in_main_runtime_thread_js=function(){return(rs=tc._emscripten_run_in_main_runtime_thread_js=tc.asm.xb).apply(null,arguments)},ru=tc.__emscripten_proxy_execute_task_queue=function(){return(ru=tc.__emscripten_proxy_execute_task_queue=tc.asm.yb).apply(null,arguments)},rl=tc.__emscripten_thread_free_data=function(){return(rl=tc.__emscripten_thread_free_data=tc.asm.zb).apply(null,arguments)},rc=tc.__emscripten_thread_exit=function(){return(rc=tc.__emscripten_thread_exit=tc.asm.Ab).apply(null,arguments)},rp=tc._setThrew=function(){return(rp=tc._setThrew=tc.asm.Bb).apply(null,arguments)},rf=tc._emscripten_stack_set_limits=function(){return(rf=tc._emscripten_stack_set_limits=tc.asm.Cb).apply(null,arguments)},rd=tc.stackSave=function(){return(rd=tc.stackSave=tc.asm.Db).apply(null,arguments)},rh=tc.stackRestore=function(){return(rh=tc.stackRestore=tc.asm.Eb).apply(null,arguments)},rg=tc.stackAlloc=function(){return(rg=tc.stackAlloc=tc.asm.Fb).apply(null,arguments)},rb=tc.___cxa_can_catch=function(){return(rb=tc.___cxa_can_catch=tc.asm.Gb).apply(null,arguments)},rm=tc.___cxa_is_pointer_type=function(){return(rm=tc.___cxa_is_pointer_type=tc.asm.Hb).apply(null,arguments)},ry=tc.dynCall_j=function(){return(ry=tc.dynCall_j=tc.asm.Ib).apply(null,arguments)},r_=tc.dynCall_iiiiij=function(){return(r_=tc.dynCall_iiiiij=tc.asm.Jb).apply(null,arguments)},rv=tc.dynCall_jii=function(){return(rv=tc.dynCall_jii=tc.asm.Kb).apply(null,arguments)},rx=tc.dynCall_viiiiij=function(){return(rx=tc.dynCall_viiiiij=tc.asm.Lb).apply(null,arguments)},rw=tc.dynCall_vjji=function(){return(rw=tc.dynCall_vjji=tc.asm.Mb).apply(null,arguments)},rT=tc.dynCall_viiijjjii=function(){return(rT=tc.dynCall_viiijjjii=tc.asm.Nb).apply(null,arguments)},rS=tc.dynCall_iij=function(){return(rS=tc.dynCall_iij=tc.asm.Ob).apply(null,arguments)},rO=tc.dynCall_ji=function(){return(rO=tc.dynCall_ji=tc.asm.Pb).apply(null,arguments)},rA=tc.dynCall_iiiiiij=function(){return(rA=tc.dynCall_iiiiiij=tc.asm.Qb).apply(null,arguments)},rE=tc.dynCall_iiij=function(){return(rE=tc.dynCall_iiij=tc.asm.Rb).apply(null,arguments)};function rI(){function tr(){if(!ra&&(ra=!0,tc.calledRun=!0,!tB)&&(tO||el(tJ),tp(tc),tc.onRuntimeInitialized&&tc.onRuntimeInitialized(),!tO)){if(tc.postRun)for("function"==typeof tc.postRun&&(tc.postRun=[tc.postRun]);tc.postRun.length;){var tr=tc.postRun.shift();t0.unshift(tr)}el(t0)}}if(!(0{var to,ta=(to=(to="undefined"!=typeof document&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(tr){tr=tr||{},tn||(tn=void 0!==tr?tr:{}),tn.ready=new Promise(function(tr,tn){ta=tr,ts=tn});var tn,ta,ts,tu,tl,tc,tp,tf,td,th=Object.assign({},tn),tg="./this.program",tb=(tr,tn)=>{throw tn},tm="object"==typeof window,ty="function"==typeof importScripts,t_="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node,tv="";t_?(tv=ty?ti(908).dirname(tv)+"/":"//",td=()=>{tf||(tp=ti(1384),tf=ti(908))},tu=function(tr,tn){return td(),tr=tf.normalize(tr),tp.readFileSync(tr,tn?void 0:"utf8")},tc=tr=>((tr=tu(tr,!0)).buffer||(tr=new Uint8Array(tr)),tr),tl=(tr,tn,ti)=>{td(),tr=tf.normalize(tr),tp.readFile(tr,function(tr,to){tr?ti(tr):tn(to.buffer)})},1{if(tS||0{var tn=new XMLHttpRequest;return tn.open("GET",tr,!1),tn.send(null),tn.responseText},ty&&(tc=tr=>{var tn=new XMLHttpRequest;return tn.open("GET",tr,!1),tn.responseType="arraybuffer",tn.send(null),new Uint8Array(tn.response)}),tl=(tr,tn,ti)=>{var to=new 
[minified bundle, summarized: Emscripten-generated loader for ort-wasm.wasm (ONNX Runtime Web). This stretch holds the runtime preamble — XHR/fetch plumbing for the .wasm binary, print/printErr hooks, typed-array heap views over the wasm memory (HEAP8/16/32, HEAPU8/16/32, HEAPF32/64), UTF-8 encode/decode/length helpers with a TextDecoder fast path, abort and run-dependency bookkeeping, a C++ ExceptionInfo record with reference counting, a strftime implementation with the full %a…%Z specifier table, and the wasm import object: exception trampolines, gmtime/localtime/mktime, timezone setup, heap growth (memory.grow capped at 4 GiB), environ accessors, fd_write buffering, a crypto.getRandomValues-backed random device, several dozen invoke_* wrappers that save and restore the stack around indirect calls, and the start of the asynchronous wasm instantiation logic.]
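The instantiation code visible here tries WebAssembly.instantiateStreaming first and, on failure, logs "wasm streaming compile failed" and falls back to fetching the bytes and instantiating from an ArrayBuffer. A minimal sketch of that strategy — instantiate, wasmUrl, and imports are names chosen for this example, not the bundle's own:

// Sketch of the streaming-with-fallback instantiation pattern above.
// Assumes a fetch-capable environment (browsers, Node 18+).
async function instantiate(wasmUrl, imports) {
  if (typeof WebAssembly.instantiateStreaming === 'function') {
    try {
      return await WebAssembly.instantiateStreaming(fetch(wasmUrl), imports);
    } catch (e) {
      // Typically a wrong MIME type on the .wasm response; fall through.
      console.warn('wasm streaming compile failed: ' + e);
    }
  }
  const bytes = await (await fetch(wasmUrl)).arrayBuffer();
  return WebAssembly.instantiate(bytes, imports);
}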
'"+tH+"'";return tr.arrayBuffer()}).catch(function(){return tJ()});if(tl)return new Promise(function(tr,tn){tl(tH,function(tn){tr(new Uint8Array(tn))},tn)})}return Promise.resolve().then(function(){return tJ()})})().then(function(tr){return WebAssembly.instantiate(tr,ta)}).then(function(tr){return tr}).then(tr,function(tr){tT("failed to asynchronously prepare wasm: "+tr),tY(tr)})}var ta={a:ep};if(tW++,tn.monitorRunDependencies&&tn.monitorRunDependencies(tW),tn.instantiateWasm)try{return tn.instantiateWasm(ta,tr)}catch(tr){return tT("Module.instantiateWasm callback failed with error: "+tr),!1}(tx||"function"!=typeof WebAssembly.instantiateStreaming||tK()||tH.startsWith("file://")||t_||"function"!=typeof fetch?to(ti):fetch(tH,{credentials:"same-origin"}).then(function(tr){return WebAssembly.instantiateStreaming(tr,ta).then(ti,function(tr){return tT("wasm streaming compile failed: "+tr),tT("falling back to ArrayBuffer instantiation"),to(ti)})})).catch(ts)})(),tn.___wasm_call_ctors=function(){return(tn.___wasm_call_ctors=tn.asm.La).apply(null,arguments)},tn._OrtInit=function(){return(tn._OrtInit=tn.asm.Ma).apply(null,arguments)},tn._OrtCreateSessionOptions=function(){return(tn._OrtCreateSessionOptions=tn.asm.Na).apply(null,arguments)},tn._OrtAppendExecutionProvider=function(){return(tn._OrtAppendExecutionProvider=tn.asm.Oa).apply(null,arguments)},tn._OrtAddSessionConfigEntry=function(){return(tn._OrtAddSessionConfigEntry=tn.asm.Pa).apply(null,arguments)},tn._OrtReleaseSessionOptions=function(){return(tn._OrtReleaseSessionOptions=tn.asm.Qa).apply(null,arguments)},tn._OrtCreateSession=function(){return(tn._OrtCreateSession=tn.asm.Ra).apply(null,arguments)},tn._OrtReleaseSession=function(){return(tn._OrtReleaseSession=tn.asm.Sa).apply(null,arguments)},tn._OrtGetInputCount=function(){return(tn._OrtGetInputCount=tn.asm.Ta).apply(null,arguments)},tn._OrtGetOutputCount=function(){return(tn._OrtGetOutputCount=tn.asm.Ua).apply(null,arguments)},tn._OrtGetInputName=function(){return(tn._OrtGetInputName=tn.asm.Va).apply(null,arguments)},tn._OrtGetOutputName=function(){return(tn._OrtGetOutputName=tn.asm.Wa).apply(null,arguments)},tn._OrtFree=function(){return(tn._OrtFree=tn.asm.Xa).apply(null,arguments)},tn._OrtCreateTensor=function(){return(tn._OrtCreateTensor=tn.asm.Ya).apply(null,arguments)},tn._OrtGetTensorData=function(){return(tn._OrtGetTensorData=tn.asm.Za).apply(null,arguments)},tn._OrtReleaseTensor=function(){return(tn._OrtReleaseTensor=tn.asm._a).apply(null,arguments)},tn._OrtCreateRunOptions=function(){return(tn._OrtCreateRunOptions=tn.asm.$a).apply(null,arguments)},tn._OrtAddRunConfigEntry=function(){return(tn._OrtAddRunConfigEntry=tn.asm.ab).apply(null,arguments)},tn._OrtReleaseRunOptions=function(){return(tn._OrtReleaseRunOptions=tn.asm.bb).apply(null,arguments)},tn._OrtRun=function(){return(tn._OrtRun=tn.asm.cb).apply(null,arguments)},tn._OrtEndProfiling=function(){return(tn._OrtEndProfiling=tn.asm.db).apply(null,arguments)};var 
ef,ed=tn._malloc=function(){return(ed=tn._malloc=tn.asm.eb).apply(null,arguments)},eh=tn._free=function(){return(eh=tn._free=tn.asm.fb).apply(null,arguments)},eg=tn._fflush=function(){return(eg=tn._fflush=tn.asm.gb).apply(null,arguments)},eb=tn.___funcs_on_exit=function(){return(eb=tn.___funcs_on_exit=tn.asm.hb).apply(null,arguments)},em=tn._setThrew=function(){return(em=tn._setThrew=tn.asm.jb).apply(null,arguments)},ey=tn.stackSave=function(){return(ey=tn.stackSave=tn.asm.kb).apply(null,arguments)},e_=tn.stackRestore=function(){return(e_=tn.stackRestore=tn.asm.lb).apply(null,arguments)},ev=tn.stackAlloc=function(){return(ev=tn.stackAlloc=tn.asm.mb).apply(null,arguments)},ex=tn.___cxa_can_catch=function(){return(ex=tn.___cxa_can_catch=tn.asm.nb).apply(null,arguments)},ew=tn.___cxa_is_pointer_type=function(){return(ew=tn.___cxa_is_pointer_type=tn.asm.ob).apply(null,arguments)},eT=tn.dynCall_j=function(){return(eT=tn.dynCall_j=tn.asm.pb).apply(null,arguments)},eS=tn.dynCall_iiiiij=function(){return(eS=tn.dynCall_iiiiij=tn.asm.qb).apply(null,arguments)},eO=tn.dynCall_jii=function(){return(eO=tn.dynCall_jii=tn.asm.rb).apply(null,arguments)},eA=tn.dynCall_viiiiij=function(){return(eA=tn.dynCall_viiiiij=tn.asm.sb).apply(null,arguments)},eE=tn.dynCall_vjji=function(){return(eE=tn.dynCall_vjji=tn.asm.tb).apply(null,arguments)},eI=tn.dynCall_viiijjjii=function(){return(eI=tn.dynCall_viiijjjii=tn.asm.ub).apply(null,arguments)},eP=tn.dynCall_iij=function(){return(eP=tn.dynCall_iij=tn.asm.vb).apply(null,arguments)},eD=tn.dynCall_ji=function(){return(eD=tn.dynCall_ji=tn.asm.wb).apply(null,arguments)},e$=tn.dynCall_iiiiiij=function(){return(e$=tn.dynCall_iiiiiij=tn.asm.xb).apply(null,arguments)},ek=tn.dynCall_iiij=function(){return(ek=tn.dynCall_iiij=tn.asm.yb).apply(null,arguments)};function eC(){function tr(){if(!ef&&(ef=!0,tn.calledRun=!0,!t$)){if(t0(tU),ta(tn),tn.onRuntimeInitialized&&tn.onRuntimeInitialized(),tn.postRun)for("function"==typeof tn.postRun&&(tn.postRun=[tn.postRun]);tn.postRun.length;){var tr=tn.postRun.shift();tB.unshift(tr)}t0(tB)}}if(!(0{"use strict";tr.exports=function(tr,tn){for(var ti=Array(arguments.length-1),to=0,ta=2,ts=!0;ta{"use strict";var ti=tn;ti.length=function(tr){var tn=tr.length;if(!tn)return 0;for(var ti=0;--tn%4>1&&"="===tr.charAt(tn);)++ti;return Math.ceil(3*tr.length)/4-ti};for(var to=Array(64),ta=Array(123),ts=0;ts<64;)ta[to[ts]=ts<26?ts+65:ts<52?ts+71:ts<62?ts-4:ts-59|43]=ts++;ti.encode=function(tr,tn,ti){for(var ta,ts=null,tu=[],tl=0,tc=0;tn>2],ta=(3&tp)<<4,tc=1;break;case 1:tu[tl++]=to[ta|tp>>4],ta=(15&tp)<<2,tc=2;break;case 2:tu[tl++]=to[ta|tp>>6],tu[tl++]=to[63&tp],tc=0}tl>8191&&((ts||(ts=[])).push(String.fromCharCode.apply(String,tu)),tl=0)}return tc&&(tu[tl++]=to[ta],tu[tl++]=61,1===tc&&(tu[tl++]=61)),ts?(tl&&ts.push(String.fromCharCode.apply(String,tu.slice(0,tl))),ts.join("")):String.fromCharCode.apply(String,tu.slice(0,tl))};var tu="invalid encoding";ti.decode=function(tr,tn,ti){for(var to,ts=ti,tl=0,tc=0;tc1)break;if(void 0===(tp=ta[tp]))throw Error(tu);switch(tl){case 0:to=tp,tl=1;break;case 1:tn[ti++]=to<<2|(48&tp)>>4,to=tp,tl=2;break;case 2:tn[ti++]=(15&to)<<4|(60&tp)>>2,to=tp,tl=3;break;case 3:tn[ti++]=(3&to)<<6|tp,tl=0}}if(1===tl)throw Error(tu);return ti-ts},ti.test=function(tr){return/^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$/.test(tr)}},9211:tr=>{"use strict";function 
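Every Ort export above follows one Emscripten idiom: the function on the module object starts life as a stub that, on first call, rebinds itself to the real wasm export and forwards its arguments, so all later calls skip the indirection. A minimal sketch of that pattern — Module and makeStub are illustrative names, not the bundle's:

// Self-rebinding export stub, as in tn._OrtRun = function(){
//   return (tn._OrtRun = tn.asm.cb).apply(null, arguments) } above.
const Module = { asm: null };

function makeStub(name, asmKey) {
  Module['_' + name] = function () {
    // First call: replace the stub with the real wasm export, then forward.
    return (Module['_' + name] = Module.asm[asmKey]).apply(null, arguments);
  };
}

makeStub('OrtRun', 'cb');
Module.asm = { cb: (a, b) => a + b };   // stand-in for the wasm instance
console.log(Module._OrtRun(2, 3));      // 5 — the stub has now rebound itself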
[minified bundle, summarized: bundled protobuf.js utility modules — the minimal EventEmitter's on/off/emit; float32/float64 readers and writers that detect platform endianness by writing -0 through a Float32Array and checking which byte holds the sign bit, with a manual IEEE-754 pack/unpack fallback when typed arrays are unavailable; an `inquire` helper that hides require from bundlers by assembling the name via eval; a slab pool allocator for small buffers; UTF-8 length/read/write routines; a Guid class; and the start of long.js, which compiles a tiny inline wasm module at load time for fast 64-bit mul/div/rem.]
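The float codec's endianness probe survives verbatim above (new Float32Array([-0]), then 128===bytes[3]). A self-contained sketch of the trick:

// Endianness probe as used by the float reader/writer selection above.
// -0 as float32 is 0x80000000, so the sign byte's position reveals byte order.
const f32 = new Float32Array([-0]);
const bytes = new Uint8Array(f32.buffer);
const littleEndian = bytes[3] === 128; // sign bit in the last byte => LE
console.log(littleEndian ? 'LE' : 'BE');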
[minified bundle, summarized: the remainder of long.js — small-integer caches for fromInt, fromNumber/fromString/fromBits/fromValue, toNumber/toString, comparison operators (eq/lt/gt and friends), negate, add/subtract, a multiply built from 16-bit limbs with a wasm fast path, long division (wasm fast path plus an estimate-and-correct loop), modulo, bitwise not/and/or/xor, the three shift variants, toSigned/toUnsigned, and toBytesLE/BE with the matching fromBytes constructors — then the start of module 1446, protobuf.js-generated code for onnx.proto.]
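The multiply above splits each 32-bit half into 16-bit limbs (high>>>16 and &0xFFFF) so every partial product stays within JavaScript's exact-integer range. A runnable sketch of the same trick for a 32×32→64-bit unsigned multiply — mul32to64 is our name for it:

// 16-bit-limb multiplication, the technique long.js uses above.
function mul32to64(a, b) {
  const aHi = a >>> 16, aLo = a & 0xFFFF;
  const bHi = b >>> 16, bLo = b & 0xFFFF;
  // Each partial product fits in 32 bits, so no precision is lost.
  const ll = aLo * bLo, lh = aLo * bHi, hl = aHi * bLo, hh = aHi * bHi;
  const mid = (ll >>> 16) + (lh & 0xFFFF) + (hl & 0xFFFF);
  const lo = ((mid & 0xFFFF) << 16 | (ll & 0xFFFF)) >>> 0;
  const hi = (hh + (lh >>> 16) + (hl >>> 16) + (mid >>> 16)) >>> 0;
  return { lo, hi };
}
console.log(mul32to64(0xFFFFFFFF, 2)); // { lo: 0xFFFFFFFE, hi: 1 }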
[minified bundle, summarized: protobuf.js-generated message classes for onnx.proto — the IR Version enum (IR_VERSION_2017_10_10 through IR_VERSION), AttributeProto, ValueInfoProto, NodeProto, ModelProto, StringStringEntryProto, TensorAnnotation, GraphProto, TensorProto (with its Segment submessage and DataLocation DEFAULT/EXTERNAL enum), TensorShapeProto (with Dimension and its dimValue/dimParam oneof), TypeProto (with Tensor), and OperatorSetIdProto. Each class carries decode/decodeDelimited, verify, fromObject/toObject, and toJSON built on a shared Reader/Writer, with int64 fields round-tripped through Long or LongBits and bytes fields through the base64 codec. The protobuf-minimal index module follows (Writer, BufferWriter, Reader, BufferReader, util, rpc, roots, configure), then the Reader's varint and fixed-width read helpers.]
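All the decode loops above hinge on the protobuf wire format's key layout: each field key is a varint packing (fieldNumber << 3) | wireType, split with >>>3 and &7 exactly as in the switch statements visible in the generated code. A hand-rolled sketch decoding a two-string message shaped like StringStringEntryProto (field 1 = key, field 2 = value) — decodeEntry and its helpers are our illustrative names:

// Minimal length-delimited decoder in the spirit of the generated code above.
function decodeEntry(buf) {
  let pos = 0;
  const readVarint = () => {
    let v = 0, s = 0, b;
    do { b = buf[pos++]; v |= (b & 0x7F) << s; s += 7; } while (b & 0x80);
    return v >>> 0;
  };
  const readString = () => {
    const n = readVarint();                       // wire type 2: length prefix
    const s = new TextDecoder().decode(buf.subarray(pos, pos + n));
    pos += n; return s;
  };
  const out = {};
  while (pos < buf.length) {
    const tag = readVarint();                     // (fieldNumber << 3) | wireType
    if (tag >>> 3 === 1) out.key = readString();
    else if (tag >>> 3 === 2) out.value = readString();
  }
  return out;
}
// 0x0A = (1<<3)|2 then "hi"; 0x12 = (2<<3)|2 then "yo"
console.log(decodeEntry(new Uint8Array([0x0A, 2, 104, 105, 0x12, 2, 121, 111])));
// { key: 'hi', value: 'yo' }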
tr;if(tn=0,this.len-this.pos>4){for(;tn<5;++tn)if(tr.hi=(tr.hi|(127&this.buf[this.pos])<<7*tn+3)>>>0,this.buf[this.pos++]<128)return tr}else for(;tn<5;++tn){if(this.pos>=this.len)throw tl(this);if(tr.hi=(tr.hi|(127&this.buf[this.pos])<<7*tn+3)>>>0,this.buf[this.pos++]<128)return tr}throw Error("invalid varint encoding")}function tg(tr,tn){return(tr[tn-4]|tr[tn-3]<<8|tr[tn-2]<<16|tr[tn-1]<<24)>>>0}function tb(){if(this.pos+8>this.len)throw tl(this,8);return new ts(tg(this.buf,this.pos+=4),tg(this.buf,this.pos+=4))}tc.create=td(),tc.prototype._slice=ta.Array.prototype.subarray||ta.Array.prototype.slice,tc.prototype.uint32=(tp=4294967295,function(){if(tp=(127&this.buf[this.pos])>>>0,this.buf[this.pos++]<128||(tp=(tp|(127&this.buf[this.pos])<<7)>>>0,this.buf[this.pos++]<128)||(tp=(tp|(127&this.buf[this.pos])<<14)>>>0,this.buf[this.pos++]<128)||(tp=(tp|(127&this.buf[this.pos])<<21)>>>0,this.buf[this.pos++]<128)||(tp=(tp|(15&this.buf[this.pos])<<28)>>>0,this.buf[this.pos++]<128))return tp;if((this.pos+=5)>this.len)throw this.pos=this.len,tl(this,10);return tp}),tc.prototype.int32=function(){return 0|this.uint32()},tc.prototype.sint32=function(){var tr=this.uint32();return tr>>>1^-(1&tr)|0},tc.prototype.bool=function(){return 0!==this.uint32()},tc.prototype.fixed32=function(){if(this.pos+4>this.len)throw tl(this,4);return tg(this.buf,this.pos+=4)},tc.prototype.sfixed32=function(){if(this.pos+4>this.len)throw tl(this,4);return 0|tg(this.buf,this.pos+=4)},tc.prototype.float=function(){if(this.pos+4>this.len)throw tl(this,4);var tr=ta.float.readFloatLE(this.buf,this.pos);return this.pos+=4,tr},tc.prototype.double=function(){if(this.pos+8>this.len)throw tl(this,4);var tr=ta.float.readDoubleLE(this.buf,this.pos);return this.pos+=8,tr},tc.prototype.bytes=function(){var tr=this.uint32(),tn=this.pos,ti=this.pos+tr;if(ti>this.len)throw tl(this,tr);return this.pos+=tr,Array.isArray(this.buf)?this.buf.slice(tn,ti):tn===ti?new this.buf.constructor(0):this._slice.call(this.buf,tn,ti)},tc.prototype.string=function(){var tr=this.bytes();return tu.read(tr,0,tr.length)},tc.prototype.skip=function(tr){if("number"==typeof tr){if(this.pos+tr>this.len)throw tl(this,tr);this.pos+=tr}else do if(this.pos>=this.len)throw tl(this);while(128&this.buf[this.pos++]);return this},tc.prototype.skipType=function(tr){switch(tr){case 0:this.skip();break;case 1:this.skip(8);break;case 2:this.skip(this.uint32());break;case 3:for(;4!=(tr=7&this.uint32());)this.skipType(tr);break;case 5:this.skip(4);break;default:throw Error("invalid wire type "+tr+" at offset "+this.pos)}return this},tc._configure=function(tr){to=tr,tc.create=td(),to._configure();var tn=ta.Long?"toLong":"toNumber";ta.merge(tc.prototype,{int64:function(){return th.call(this)[tn](!1)},uint64:function(){return th.call(this)[tn](!0)},sint64:function(){return th.call(this).zzDecode()[tn](!1)},fixed64:function(){return tb.call(this)[tn](!0)},sfixed64:function(){return tb.call(this)[tn](!1)}})}},593:(tr,tn,ti)=>{"use strict";tr.exports=ts;var to=ti(1408);(ts.prototype=Object.create(to.prototype)).constructor=ts;var ta=ti(9693);function ts(tr){to.call(this,tr)}ts._configure=function(){ta.Buffer&&(ts.prototype._slice=ta.Buffer.prototype.slice)},ts.prototype.string=function(){var tr=this.uint32();return this.buf.utf8Slice?this.buf.utf8Slice(this.pos,this.pos=Math.min(this.pos+tr,this.len)):this.buf.toString("utf-8",this.pos,this.pos=Math.min(this.pos+tr,this.len))},ts._configure()},5054:tr=>{"use strict";tr.exports={}},5994:(tr,tn,ti)=>{"use 
strict";tn.Service=ti(7948)},7948:(tr,tn,ti)=>{"use strict";tr.exports=ta;var to=ti(9693);function ta(tr,tn,ti){if("function"!=typeof tr)throw TypeError("rpcImpl must be a function");to.EventEmitter.call(this),this.rpcImpl=tr,this.requestDelimited=!!tn,this.responseDelimited=!!ti}(ta.prototype=Object.create(to.EventEmitter.prototype)).constructor=ta,ta.prototype.rpcCall=function tr(tn,ti,ta,ts,tu){if(!ts)throw TypeError("request must be specified");var tl=this;if(!tu)return to.asPromise(tr,tl,tn,ti,ta,ts);if(tl.rpcImpl)try{return tl.rpcImpl(tn,ti[tl.requestDelimited?"encodeDelimited":"encode"](ts).finish(),function(tr,ti){if(tr)return tl.emit("error",tr,tn),tu(tr);if(null!==ti){if(!(ti instanceof ta))try{ti=ta[tl.responseDelimited?"decodeDelimited":"decode"](ti)}catch(tr){return tl.emit("error",tr,tn),tu(tr)}return tl.emit("data",ti,tn),tu(null,ti)}tl.end(!0)})}catch(tr){return tl.emit("error",tr,tn),void setTimeout(function(){tu(tr)},0)}else setTimeout(function(){tu(Error("already ended"))},0)},ta.prototype.end=function(tr){return this.rpcImpl&&(tr||this.rpcImpl(null,null,null),this.rpcImpl=null,this.emit("end").off()),this}},1945:(tr,tn,ti)=>{"use strict";tr.exports=ta;var to=ti(9693);function ta(tr,tn){this.lo=tr>>>0,this.hi=tn>>>0}var ts=ta.zero=new ta(0,0);ts.toNumber=function(){return 0},ts.zzEncode=ts.zzDecode=function(){return this},ts.length=function(){return 1};var tu=ta.zeroHash="\x00\x00\x00\x00\x00\x00\x00\x00";ta.fromNumber=function(tr){if(0===tr)return ts;var tn=tr<0;tn&&(tr=-tr);var ti=tr>>>0,to=(tr-ti)/4294967296>>>0;return tn&&(to=~to>>>0,ti=~ti>>>0,++ti>4294967295&&(ti=0,++to>4294967295&&(to=0))),new ta(ti,to)},ta.from=function(tr){if("number"==typeof tr)return ta.fromNumber(tr);if(to.isString(tr)){if(!to.Long)return ta.fromNumber(parseInt(tr,10));tr=to.Long.fromString(tr)}return tr.low||tr.high?new ta(tr.low>>>0,tr.high>>>0):ts},ta.prototype.toNumber=function(tr){if(!tr&&this.hi>>>31){var tn=1+~this.lo>>>0,ti=~this.hi>>>0;return tn||(ti=ti+1>>>0),-(tn+4294967296*ti)}return this.lo+4294967296*this.hi},ta.prototype.toLong=function(tr){return to.Long?new to.Long(0|this.lo,0|this.hi,!!tr):{low:0|this.lo,high:0|this.hi,unsigned:!!tr}};var tl=String.prototype.charCodeAt;ta.fromHash=function(tr){return tr===tu?ts:new ta((tl.call(tr,0)|tl.call(tr,1)<<8|tl.call(tr,2)<<16|tl.call(tr,3)<<24)>>>0,(tl.call(tr,4)|tl.call(tr,5)<<8|tl.call(tr,6)<<16|tl.call(tr,7)<<24)>>>0)},ta.prototype.toHash=function(){return String.fromCharCode(255&this.lo,this.lo>>>8&255,this.lo>>>16&255,this.lo>>>24,255&this.hi,this.hi>>>8&255,this.hi>>>16&255,this.hi>>>24)},ta.prototype.zzEncode=function(){var tr=this.hi>>31;return this.hi=((this.hi<<1|this.lo>>>31)^tr)>>>0,this.lo=(this.lo<<1^tr)>>>0,this},ta.prototype.zzDecode=function(){var tr=-(1&this.lo);return this.lo=((this.lo>>>1|this.hi<<31)^tr)>>>0,this.hi=(this.hi>>>1^tr)>>>0,this},ta.prototype.length=function(){var tr=this.lo,tn=(this.lo>>>28|this.hi<<4)>>>0,ti=this.hi>>>24;return 0===ti?0===tn?tr<16384?tr<128?1:2:tr<2097152?3:4:tn<16384?tn<128?5:6:tn<2097152?7:8:ti<128?9:10}},9693:function(tr,tn,ti){"use strict";var to=tn;function ta(tr,tn,ti){for(var to=Object.keys(tn),ta=0;ta0)},to.Buffer=function(){try{var tr=to.inquire("buffer").Buffer;return tr.prototype.utf8Write?tr:null}catch(tr){return null}}(),to._Buffer_from=null,to._Buffer_allocUnsafe=null,to.newBuffer=function(tr){return"number"==typeof tr?to.Buffer?to._Buffer_allocUnsafe(tr):new to.Array(tr):to.Buffer?to._Buffer_from(tr):"undefined"==typeof Uint8Array?tr:new 
Uint8Array(tr)},to.Array="undefined"!=typeof Uint8Array?Uint8Array:Array,to.Long=to.global.dcodeIO&&to.global.dcodeIO.Long||to.global.Long||to.inquire("long"),to.key2Re=/^true|false|0|1$/,to.key32Re=/^-?(?:0|[1-9][0-9]*)$/,to.key64Re=/^(?:[\\x00-\\xff]{8}|-?(?:0|[1-9][0-9]*))$/,to.longToHash=function(tr){return tr?to.LongBits.from(tr).toHash():to.LongBits.zeroHash},to.longFromHash=function(tr,tn){var ti=to.LongBits.fromHash(tr);return to.Long?to.Long.fromBits(ti.lo,ti.hi,tn):ti.toNumber(!!tn)},to.merge=ta,to.lcFirst=function(tr){return tr.charAt(0).toLowerCase()+tr.substring(1)},to.newError=ts,to.ProtocolError=ts("ProtocolError"),to.oneOfGetter=function(tr){for(var tn={},ti=0;ti-1;--ti)if(1===tn[tr[ti]]&&void 0!==this[tr[ti]]&&null!==this[tr[ti]])return tr[ti]}},to.oneOfSetter=function(tr){return function(tn){for(var ti=0;ti{"use strict";tr.exports=td;var to,ta=ti(9693),ts=ta.LongBits,tu=ta.base64,tl=ta.utf8;function tc(tr,tn,ti){this.fn=tr,this.len=tn,this.next=void 0,this.val=ti}function tp(){}function tf(tr){this.head=tr.head,this.tail=tr.tail,this.len=tr.len,this.next=tr.states}function td(){this.len=0,this.head=new tc(tp,0,0),this.tail=this.head,this.states=null}var th=function(){return ta.Buffer?function(){return(td.create=function(){return new to})()}:function(){return new td}};function tg(tr,tn,ti){tn[ti]=255&tr}function tb(tr,tn){this.len=tr,this.next=void 0,this.val=tn}function tm(tr,tn,ti){for(;tr.hi;)tn[ti++]=127&tr.lo|128,tr.lo=(tr.lo>>>7|tr.hi<<25)>>>0,tr.hi>>>=7;for(;tr.lo>127;)tn[ti++]=127&tr.lo|128,tr.lo=tr.lo>>>7;tn[ti++]=tr.lo}function ty(tr,tn,ti){tn[ti]=255&tr,tn[ti+1]=tr>>>8&255,tn[ti+2]=tr>>>16&255,tn[ti+3]=tr>>>24}td.create=th(),td.alloc=function(tr){return new ta.Array(tr)},ta.Array!==Array&&(td.alloc=ta.pool(td.alloc,ta.Array.prototype.subarray)),td.prototype._push=function(tr,tn,ti){return this.tail=this.tail.next=new tc(tr,tn,ti),this.len+=tn,this},tb.prototype=Object.create(tc.prototype),tb.prototype.fn=function(tr,tn,ti){for(;tr>127;)tn[ti++]=127&tr|128,tr>>>=7;tn[ti]=tr},td.prototype.uint32=function(tr){return this.len+=(this.tail=this.tail.next=new tb((tr>>>=0)<128?1:tr<16384?2:tr<2097152?3:tr<268435456?4:5,tr)).len,this},td.prototype.int32=function(tr){return tr<0?this._push(tm,10,ts.fromNumber(tr)):this.uint32(tr)},td.prototype.sint32=function(tr){return this.uint32((tr<<1^tr>>31)>>>0)},td.prototype.uint64=function(tr){var tn=ts.from(tr);return this._push(tm,tn.length(),tn)},td.prototype.int64=td.prototype.uint64,td.prototype.sint64=function(tr){var tn=ts.from(tr).zzEncode();return this._push(tm,tn.length(),tn)},td.prototype.bool=function(tr){return this._push(tg,1,tr?1:0)},td.prototype.fixed32=function(tr){return this._push(ty,4,tr>>>0)},td.prototype.sfixed32=td.prototype.fixed32,td.prototype.fixed64=function(tr){var tn=ts.from(tr);return this._push(ty,4,tn.lo)._push(ty,4,tn.hi)},td.prototype.sfixed64=td.prototype.fixed64,td.prototype.float=function(tr){return this._push(ta.float.writeFloatLE,4,tr)},td.prototype.double=function(tr){return this._push(ta.float.writeDoubleLE,8,tr)};var t_=ta.Array.prototype.set?function(tr,tn,ti){tn.set(tr,ti)}:function(tr,tn,ti){for(var to=0;to>>0;if(!tn)return this._push(tg,1,0);if(ta.isString(tr)){var ti=td.alloc(tn=tu.length(tr));tu.decode(tr,ti,0),tr=ti}return this.uint32(tn)._push(t_,tn,tr)},td.prototype.string=function(tr){var tn=tl.length(tr);return tn?this.uint32(tn)._push(tl.write,tn,tr):this._push(tg,1,0)},td.prototype.fork=function(){return this.states=new tf(this),this.head=this.tail=new 
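// --- Editor's illustrative sketch: the Writer above queues (fn, len, val) operation
// nodes in a linked list instead of writing bytes immediately, so a single buffer can
// be allocated once the total length is known. The per-value length of an unsigned
// varint is the comparison ladder used by uint32() above; helper names are ours.
function varintLength(v) {
  v = v >>> 0;
  return v < 0x80 ? 1 : v < 0x4000 ? 2 : v < 0x200000 ? 3 : v < 0x10000000 ? 4 : 5;
}
function writeVarint32(v, buf, pos) {
  v = v >>> 0;
  while (v > 0x7f) { buf[pos++] = (v & 0x7f) | 0x80; v >>>= 7; }
  buf[pos] = v;
}
// const n = 300, out = new Uint8Array(varintLength(n));
// writeVarint32(n, out, 0); // out: [0xac, 0x02]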
tc(tp,0,0),this.len=0,this},td.prototype.reset=function(){return this.states?(this.head=this.states.head,this.tail=this.states.tail,this.len=this.states.len,this.states=this.states.next):(this.head=this.tail=new tc(tp,0,0),this.len=0),this},td.prototype.ldelim=function(){var tr=this.head,tn=this.tail,ti=this.len;return this.reset().uint32(ti),ti&&(this.tail.next=tr.next,this.tail=tn,this.len+=ti),this},td.prototype.finish=function(){for(var tr=this.head.next,tn=this.constructor.alloc(this.len),ti=0;tr;)tr.fn(tr.val,tn,ti),ti+=tr.len,tr=tr.next;return tn},td._configure=function(tr){to=tr,td.create=th(),to._configure()}},3155:(tr,tn,ti)=>{"use strict";tr.exports=ts;var to=ti(1173);(ts.prototype=Object.create(to.prototype)).constructor=ts;var ta=ti(9693);function ts(){to.call(this)}function tu(tr,tn,ti){tr.length<40?ta.utf8.write(tr,tn,ti):tn.utf8Write?tn.utf8Write(tr,ti):tn.write(tr,ti)}ts._configure=function(){ts.alloc=ta._Buffer_allocUnsafe,ts.writeBytesBuffer=ta.Buffer&&ta.Buffer.prototype instanceof Uint8Array&&"set"===ta.Buffer.prototype.set.name?function(tr,tn,ti){tn.set(tr,ti)}:function(tr,tn,ti){if(tr.copy)tr.copy(tn,ti,0,tr.length);else for(var to=0;to>>0;return this.uint32(tn),tn&&this._push(ts.writeBytesBuffer,tn,tr),this},ts.prototype.string=function(tr){var tn=ta.Buffer.byteLength(tr);return this.uint32(tn),tn&&this._push(tu,tn,tr),this},ts._configure()},7714:(tr,tn,ti)=>{"use strict";tn.R=void 0;let to=ti(6919),ta=ti(7448);tn.R=new class{async init(){}async createSessionHandler(tr,tn){let ti=new to.Session(tn);return await ti.loadModel(tr),new ta.OnnxjsSessionHandler(ti)}}},4200:(tr,tn,ti)=>{"use strict";tn.c8=tn.rX=void 0;let to=ti(1670),ta=ti(5381),ts=ti(2157),tu=ti(2306);tn.rX=()=>{if(("number"!=typeof to.env.wasm.initTimeout||to.env.wasm.initTimeout<0)&&(to.env.wasm.initTimeout=0),"boolean"!=typeof to.env.wasm.simd&&(to.env.wasm.simd=!0),"boolean"!=typeof to.env.wasm.proxy&&(to.env.wasm.proxy=!1),"number"!=typeof to.env.wasm.numThreads||!Number.isInteger(to.env.wasm.numThreads)||to.env.wasm.numThreads<=0){let tr="undefined"==typeof navigator?(0,ta.cpus)().length:navigator.hardwareConcurrency;to.env.wasm.numThreads=Math.min(4,Math.ceil((tr||1)/2))}},tn.c8=new class{async init(){(0,tn.rX)(),await (0,ts.initWasm)()}async createSessionHandler(tr,tn){let ti=new tu.OnnxruntimeWebAssemblySessionHandler;return await ti.loadModel(tr,tn),Promise.resolve(ti)}}},6018:function(tr,tn,ti){"use strict";var to=this&&this.__createBinding||(Object.create?function(tr,tn,ti,to){void 0===to&&(to=ti);var ta=Object.getOwnPropertyDescriptor(tn,ti);ta&&!("get"in ta?!tn.__esModule:ta.writable||ta.configurable)||(ta={enumerable:!0,get:function(){return tn[ti]}}),Object.defineProperty(tr,to,ta)}:function(tr,tn,ti,to){void 0===to&&(to=ti),tr[to]=tn[ti]}),ta=this&&this.__exportStar||function(tr,tn){for(var ti in tr)"default"===ti||Object.prototype.hasOwnProperty.call(tn,ti)||to(tn,tr,ti)};Object.defineProperty(tn,"__esModule",{value:!0}),ta(ti(1670),tn);let ts=ti(1670);{let tr=ti(7714).R;(0,ts.registerBackend)("webgl",tr,-10)}{let tr=ti(4200).c8;(0,ts.registerBackend)("cpu",tr,10),(0,ts.registerBackend)("wasm",tr,10),(0,ts.registerBackend)("xnnpack",tr,9)}},246:(tr,tn)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.createAttributeWithCacheKey=void 0;class ti{constructor(tr){Object.assign(this,tr)}get cacheKey(){return this._cacheKey||(this._cacheKey=Object.getOwnPropertyNames(this).sort().map(tr=>`${this[tr]}`).join(";")),this._cacheKey}}tn.createAttributeWithCacheKey=tr=>new 
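// --- Editor's illustrative sketch: the wasm-backend init above defaults
// env.wasm.numThreads to "half the logical cores, capped at 4" whenever the embedder
// has not set a positive integer; standalone form of that heuristic:
function defaultWasmThreads(hardwareConcurrency) {
  return Math.min(4, Math.ceil((hardwareConcurrency || 1) / 2));
}
// defaultWasmThreads(8) === 4; defaultWasmThreads(2) === 1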
ti(tr)},7778:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.Attribute=void 0;let to=ti(1446),ta=ti(9395),ts=ti(9162),tu=ti(2517);var tl=ta.onnxruntime.experimental.fbs;class tc{constructor(tr){if(this._attributes=new Map,null!=tr){for(let tn of tr)tn instanceof to.onnx.AttributeProto?this._attributes.set(tn.name,[tc.getValue(tn),tc.getType(tn)]):tn instanceof tl.Attribute&&this._attributes.set(tn.name(),[tc.getValue(tn),tc.getType(tn)]);if(this._attributes.sizets.Tensor.fromProto(tr));if(tr instanceof tl.Attribute)return ti.map(tr=>ts.Tensor.fromOrtTensor(tr))}if(tn===to.onnx.AttributeProto.AttributeType.STRING&&tr instanceof to.onnx.AttributeProto){let tr=ti;return(0,tu.decodeUtf8String)(tr)}return tn===to.onnx.AttributeProto.AttributeType.STRINGS&&tr instanceof to.onnx.AttributeProto?ti.map(tu.decodeUtf8String):ti}static getValueNoCheck(tr){return tr instanceof to.onnx.AttributeProto?this.getValueNoCheckFromOnnxFormat(tr):this.getValueNoCheckFromOrtFormat(tr)}static getValueNoCheckFromOnnxFormat(tr){switch(tr.type){case to.onnx.AttributeProto.AttributeType.FLOAT:return tr.f;case to.onnx.AttributeProto.AttributeType.INT:return tr.i;case to.onnx.AttributeProto.AttributeType.STRING:return tr.s;case to.onnx.AttributeProto.AttributeType.TENSOR:return tr.t;case to.onnx.AttributeProto.AttributeType.GRAPH:return tr.g;case to.onnx.AttributeProto.AttributeType.FLOATS:return tr.floats;case to.onnx.AttributeProto.AttributeType.INTS:return tr.ints;case to.onnx.AttributeProto.AttributeType.STRINGS:return tr.strings;case to.onnx.AttributeProto.AttributeType.TENSORS:return tr.tensors;case to.onnx.AttributeProto.AttributeType.GRAPHS:return tr.graphs;default:throw Error(`unsupported attribute type: ${to.onnx.AttributeProto.AttributeType[tr.type]}`)}}static getValueNoCheckFromOrtFormat(tr){switch(tr.type()){case tl.AttributeType.FLOAT:return tr.f();case tl.AttributeType.INT:return tr.i();case tl.AttributeType.STRING:return tr.s();case tl.AttributeType.TENSOR:return tr.t();case tl.AttributeType.GRAPH:return tr.g();case tl.AttributeType.FLOATS:return tr.floatsArray();case tl.AttributeType.INTS:{let tn=[];for(let ti=0;ti{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.resolveBackend=tn.backend=void 0;let to=ti(5038),ta=new Map;async function ts(tr){let ti=tn.backend;if(void 0!==ti[tr]&&function(tr){let tn=tr;return"initialize"in tn&&"function"==typeof tn.initialize&&"createSessionHandler"in tn&&"function"==typeof tn.createSessionHandler&&"dispose"in tn&&"function"==typeof tn.dispose}(ti[tr])){let tn=ti[tr],to=tn.initialize();if("object"==typeof to&&"then"in to&&(to=await to),to)return ta.set(tr,tn),tn}}tn.backend={webgl:new to.WebGLBackend},tn.resolveBackend=async function tr(tn){if(!tn)return tr(["webgl"]);{let tr="string"==typeof tn?[tn]:tn;for(let tn of tr){let tr=ta.get(tn);if(tr)return tr;let ti=await ts(tn);if(ti)return ti}}throw Error("no available backend to use")}},5038:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.WebGLBackend=void 0;let to=ti(1670),ta=ti(6231),ts=ti(6416),tu=ti(7305);tn.WebGLBackend=class{get contextId(){return to.env.webgl.contextId}set contextId(tr){to.env.webgl.contextId=tr}get matmulMaxBatchSize(){return to.env.webgl.matmulMaxBatchSize}set matmulMaxBatchSize(tr){to.env.webgl.matmulMaxBatchSize=tr}get textureCacheMode(){return to.env.webgl.textureCacheMode}set textureCacheMode(tr){to.env.webgl.textureCacheMode=tr}get pack(){return to.env.webgl.pack}set pack(tr){to.env.webgl.pack=tr}get 
async(){return to.env.webgl.async}set async(tr){to.env.webgl.async=tr}initialize(){try{return this.glContext=(0,tu.createWebGLContext)(this.contextId),"number"!=typeof this.matmulMaxBatchSize&&(this.matmulMaxBatchSize=16),"string"!=typeof this.textureCacheMode&&(this.textureCacheMode="full"),"boolean"!=typeof this.pack&&(this.pack=!1),"boolean"!=typeof this.async&&(this.async=!1),ta.Logger.setWithEnv(to.env),ta.Logger.verbose("WebGLBackend",`Created WebGLContext: ${typeof this.glContext} with matmulMaxBatchSize: ${this.matmulMaxBatchSize}; textureCacheMode: ${this.textureCacheMode}; pack: ${this.pack}; async: ${this.async}.`),!0}catch(tr){return ta.Logger.warning("WebGLBackend",`Unable to initialize WebGLBackend. ${tr}`),!1}}createSessionHandler(tr){return new ts.WebGLSessionHandler(this,tr)}dispose(){this.glContext.dispose()}}},5107:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.CoordsGlslLib=void 0;let to=ti(2517),ta=ti(8520),ts=ti(5060),tu=ti(7859),tl=ti(9390);class tc extends ta.GlslLib{constructor(tr){super(tr)}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},this.offsetToCoords()),this.coordsToOffset()),this.toVec()),this.valueFrom()),this.getCommonUtilFuncs()),this.getInputsSamplingSnippets()),this.getOutputSamplingSnippet())}getCustomTypes(){return{}}offsetToCoords(){return{offsetToCoords:new ta.GlslLibRoutine("\n vec2 offsetToCoords(int offset, int width, int height) {\n int t = offset / width;\n int s = offset - t*width;\n vec2 coords = (vec2(s,t) + vec2(0.5,0.5)) / vec2(width, height);\n return coords;\n }\n ")}}coordsToOffset(){return{coordsToOffset:new ta.GlslLibRoutine("\n int coordsToOffset(vec2 coords, int width, int height) {\n float s = coords.s * float(width);\n float t = coords.t * float(height);\n int offset = int(t) * width + int(s);\n return offset;\n }\n ")}}getOutputSamplingSnippet(){let tr=this.context.outputTextureLayout;return tr.isPacked?this.getPackedOutputSamplingSnippet(tr):this.getUnpackedOutputSamplingSnippet(tr)}getPackedOutputSamplingSnippet(tr){let tn=tr.unpackedShape,ti=[tr.width,tr.height],to={},tu="getOutputCoords";switch(tn.length){case 0:to[tu]=this.getOutputScalarCoords();break;case 1:to[tu]=this.getOutputPacked1DCoords(tn,ti);break;case 2:to[tu]=this.getOutputPacked2DCoords(tn,ti);break;case 3:to[tu]=this.getOutputPacked3DCoords(tn,ti);break;default:to[tu]=this.getOutputPackedNDCoords(tn,ti)}let tl=` - void setOutput(vec4 val) { - ${(0,ts.getGlsl)(this.context.glContext.version).output} = val; - } - `;return to.floatTextureSetRGBA=new ta.GlslLibRoutine(tl),to}getUnpackedOutputSamplingSnippet(tr){let tn=tr.unpackedShape,ti=[tr.width,tr.height],to={},tu="getOutputCoords";switch(tn.length){case 0:to[tu]=this.getOutputScalarCoords();break;case 1:to[tu]=this.getOutputUnpacked1DCoords(tn,ti);break;case 2:to[tu]=this.getOutputUnpacked2DCoords(tn,ti);break;case 3:to[tu]=this.getOutputUnpacked3DCoords(tn,ti);break;case 4:to[tu]=this.getOutputUnpacked4DCoords(tn,ti);break;case 5:to[tu]=this.getOutputUnpacked5DCoords(tn,ti);break;case 6:to[tu]=this.getOutputUnpacked6DCoords(tn,ti);break;default:throw Error(`Unsupported output dimensionality: ${tn.length}`)}let tl=` - void setOutput(float val) { - ${(0,ts.getGlsl)(this.context.glContext.version).output} = vec4(val, 0, 0, 0); - } - `;return to.floatTextureSetR=new ta.GlslLibRoutine(tl),to}getOutputScalarCoords(){return new ta.GlslLibRoutine("\n int getOutputCoords() {\n return 0;\n }\n 
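// --- Editor's illustrative sketch: JS mirror of the offsetToCoords / coordsToOffset
// GLSL helpers emitted above. A flat tensor offset is laid out row-major across a
// width x height texture; the +0.5 centers the UV on the texel so sampling hits it.
function offsetToCoords(offset, width, height) {
  const t = Math.floor(offset / width);            // row (GLSL int division truncates)
  const s = offset - t * width;                    // column
  return [(s + 0.5) / width, (t + 0.5) / height];  // normalized (u, v)
}
function coordsToOffset(u, v, width, height) {
  return Math.floor(v * height) * width + Math.floor(u * width);
}
// round trip: coordsToOffset(...offsetToCoords(7, 4, 4), 4, 4) === 7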
")}getOutputPacked1DCoords(tr,tn){let ti=tn,to="";return to=1===ti[0]?` - int getOutputCoords() { - return 2 * int(TexCoords.y * ${ti[1]}.0); - } - `:1===ti[1]?` - int getOutputCoords() { - return 2 * int(TexCoords.x * ${ti[0]}.0); - } - `:` - int getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${ti[0]}, ${ti[1]})); - return 2 * (resTexRC.y * ${ti[0]} + resTexRC.x); - } - `,new ta.GlslLibRoutine(to)}getOutputPacked2DCoords(tr,tn){let ti="";if(to.ArrayUtil.arraysEqual(tr,tn))return ti=` - ivec2 getOutputCoords() { - return 2 * ivec2(TexCoords.xy * vec2(${tn[0]}, ${tn[1]})); - } - `,new ta.GlslLibRoutine(ti);let ts=tn,tu=Math.ceil(tr[1]/2);return ti=` - ivec2 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${ts[0]}, ${ts[1]})); - - int index = resTexRC.y * ${ts[0]} + resTexRC.x; - - // reverse r and c order for packed texture - int r = imod(index, ${tu}) * 2; - int c = 2 * (index / ${tu}); - - return ivec2(r, c); - } - `,new ta.GlslLibRoutine(ti)}getOutputPacked3DCoords(tr,tn){let ti=[tn[0],tn[1]],to=Math.ceil(tr[2]/2),ts=to*Math.ceil(tr[1]/2),tu=` - ivec3 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${ti[0]}, ${ti[1]})); - int index = resTexRC.y * ${ti[0]} + resTexRC.x; - - int b = index / ${ts}; - index -= b * ${ts}; - - // reverse r and c order for packed texture - int r = imod(index, ${to}) * 2; - int c = 2 * (index / ${to}); - - return ivec3(b, r, c); - } - `;return new ta.GlslLibRoutine(tu)}getOutputPackedNDCoords(tr,tn){let ti=[tn[0],tn[1]],to=Math.ceil(tr[tr.length-1]/2),ts=to*Math.ceil(tr[tr.length-2]/2),tu=ts,tl="",tc="b, r, c";for(let tn=2;tn=0;--tn)ts[tn]=ts[tn+1]*tr[tn+1];let tu=["r","c","d"],tl=ts.map((tr,tn)=>`int ${tu[tn]} = index / ${tr}; ${tn===ts.length-1?`int ${tu[tn+1]} = index - ${tu[tn]} * ${tr}`:`index -= ${tu[tn]} * ${tr}`};`).join("");return ti=` - ivec3 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${tn[0]}, ${tn[1]})); - int index = resTexRC.y * ${tn[0]} + resTexRC.x; - ${tl} - return ivec3(r, c, d); - } - `,new ta.GlslLibRoutine(ti)}getOutputUnpacked4DCoords(tr,tn){let ti="",to=tr.length,ts=null;to<2&&(ts=[]),(ts=Array(to-1))[to-2]=tr[to-1];for(let tn=to-3;tn>=0;--tn)ts[tn]=ts[tn+1]*tr[tn+1];let tu=["r","c","d","d2"],tl=ts.map((tr,tn)=>`int ${tu[tn]} = index / ${tr}; ${tn===ts.length-1?`int ${tu[tn+1]} = index - ${tu[tn]} * ${tr}`:`index -= ${tu[tn]} * ${tr}`};`).join("");return ti=` - ivec4 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${tn[0]}, ${tn[1]})); - int index = resTexRC.y * ${tn[0]} + resTexRC.x; - ${tl} - return ivec4(r, c, d, d2); - } - `,new ta.GlslLibRoutine(ti)}getOutputUnpacked5DCoords(tr,tn){let ti="",to=tr.length,ts=null;to<2&&(ts=[]),(ts=Array(to-1))[to-2]=tr[to-1];for(let tn=to-3;tn>=0;--tn)ts[tn]=ts[tn+1]*tr[tn+1];let tu=["r","c","d","d2","d3"],tl=ts.map((tr,tn)=>`int ${tu[tn]} = index / ${tr}; ${tn===ts.length-1?`int ${tu[tn+1]} = index - ${tu[tn]} * ${tr}`:`index -= ${tu[tn]} * ${tr}`};`).join("");return ti=` - ivec5 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${tn[0]}, ${tn[1]})); - int index = resTexRC.y * ${tn[0]} + resTexRC.x; - ${tl} - return ivec5(r, c, d, d2, d3); - } - `,new ta.GlslLibRoutine(ti)}getOutputUnpacked6DCoords(tr,tn){let ti="",to=tr.length,ts=null;to<2&&(ts=[]),(ts=Array(to-1))[to-2]=tr[to-1];for(let tn=to-3;tn>=0;--tn)ts[tn]=ts[tn+1]*tr[tn+1];let tu=["r","c","d","d2","d3","d4"],tl=ts.map((tr,tn)=>`int ${tu[tn]} = index / ${tr}; ${tn===ts.length-1?`int ${tu[tn+1]} = index - ${tu[tn]} * 
${tr}`:`index -= ${tu[tn]} * ${tr}`};`).join("");return ti=` - ivec6 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${tn[0]}, ${tn[1]})); - int index = resTexRC.y * ${tn[0]} + resTexRC.x; - ${tl} - return ivec6(r, c, d, d2, d3, d4); - } - `,new ta.GlslLibRoutine(ti)}getCommonUtilFuncs(){let tr={},tn="uvFromFlat";tr[tn]=new ta.GlslLibRoutine("\n vec2 uvFromFlat(int texNumR, int texNumC, int index) {\n int texC = index / texNumR;\n int texR = index - texC * texNumR;\n // TODO: swap texR, texC order in following function so row is corresponding to u and column is corresponding to\n // v.\n return (vec2(texR, texC) + halfCR) / vec2(texNumR, texNumC);\n }\n "),tr[tn="packedUVfrom1D"]=new ta.GlslLibRoutine("\n vec2 packedUVfrom1D(int texNumR, int texNumC, int index) {\n int texelIndex = index / 2;\n int texR = texelIndex / texNumC;\n int texC = texelIndex - texR * texNumC;\n return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);\n }\n "),tr[tn="packedUVfrom2D"]=new ta.GlslLibRoutine("\n vec2 packedUVfrom2D(int texNumR, int texNumC, int texelsInLogicalRow, int row, int col) {\n int texelIndex = (row / 2) * texelsInLogicalRow + (col / 2);\n int texR = texelIndex / texNumC;\n int texC = texelIndex - texR * texNumC;\n return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);\n }\n "),tr[tn="packedUVfrom3D"]=new ta.GlslLibRoutine("\n vec2 packedUVfrom3D(int texNumR, int texNumC,\n int texelsInBatch, int texelsInLogicalRow, int b,\n int row, int col) {\n int index = b * texelsInBatch + (row / 2) * texelsInLogicalRow + (col / 2);\n int texR = index / texNumC;\n int texC = index - texR * texNumC;\n return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);\n }\n "),tn="sampleTexture";let ti=(0,ts.getGlsl)(this.context.glContext.version);return tr[tn]=new ta.GlslLibRoutine(` - float sampleTexture(sampler2D textureSampler, vec2 uv) { - return ${ti.texture2D}(textureSampler, uv).r; - }`),tr}getInputsSamplingSnippets(){let tr={},tn=this.context.outputTextureLayout;return this.context.programInfo.inputNames.forEach((ti,to)=>{let ta=this.context.inputTextureLayouts[to],ts=(0,tl.generateShaderFuncNameFromInputSamplerName)(ti);ta.isPacked?tr[ts]=this.getPackedSamplerFromInput(ts,ti,ta):tr[ts]=this.getUnpackedSamplerFromInput(ts,ti,ta);let tu=(0,tl.generateShaderFuncNameFromInputSamplerNameAtOutCoords)(ti);ta.unpackedShape.length<=tn.unpackedShape.length&&(ta.isPacked?tr[tu]=this.getPackedSamplerAtOutputCoords(tu,ta,tn,ti):tr[tu]=this.getUnpackedSamplerAtOutputCoords(tu,ta,tn,ti))}),tr}getPackedSamplerAtOutputCoords(tr,tn,ti,ts){let tu;let tc=tn.unpackedShape,tp=ti.unpackedShape,tf=ts,td=(0,tl.generateShaderFuncNameFromInputSamplerName)(tf),th=tc.length,tg=tp.length,tb=to.BroadcastUtil.getBroadcastDims(tc,tp),tm=(0,tl.getCoordsDataType)(tg),ty=tg-th,t_=(0,tl.getGlChannels)();tu=0===th?"":tg<2&&tb.length>=1?"coords = 0;":tb.map(tr=>`coords.${t_[tr+ty]} = 0;`).join("\n");let tv="";tv=tg<2&&th>0?"coords":tc.map((tr,tn)=>`coords.${t_[tn+ty]}`).join(", ");let tx="return outputValue;",tw=1===to.ShapeUtil.size(tc),tT=1===to.ShapeUtil.size(tp);if(1!==th||tw||tT){if(tw&&!tT)tx=1===tg?"\n return vec4(outputValue.x, outputValue.x, 0., 0.);\n ":"\n return vec4(outputValue.x);\n ";else if(tb.length){let tr=th-2,tn=th-1;tb.indexOf(tr)>-1&&tb.indexOf(tn)>-1?tx="return vec4(outputValue.x);":tb.indexOf(tr)>-1?tx="return vec4(outputValue.x, outputValue.y, outputValue.x, outputValue.y);":tb.indexOf(tn)>-1&&(tx="return vec4(outputValue.xx, outputValue.zz);")}}else tx="\n return vec4(outputValue.xy, 
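// --- Editor's illustrative sketch: in "packed" mode each RGBA texel stores a 2x2
// block of the logical tensor, so the packedUVfrom2D helper above first maps a logical
// (row, col) to a texel index with row/2 and col/2, then to a normalized UV:
function packedUVfrom2D(texNumR, texNumC, texelsInLogicalRow, row, col) {
  const texelIndex = (row >> 1) * texelsInLogicalRow + (col >> 1);
  const texR = Math.floor(texelIndex / texNumC);
  const texC = texelIndex - texR * texNumC;
  return [(texC + 0.5) / texNumC, (texR + 0.5) / texNumR];
}
// a 4x6 logical matrix packs into 2x3 texels; logical (3, 5) lands in texel (1, 2):
// packedUVfrom2D(2, 3, 3, 3, 5) -> [(2 + 0.5) / 3, (1 + 0.5) / 2]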
outputValue.xy);\n ";let tS=` - vec4 ${tr}() { - ${tm} coords = getOutputCoords(); - - int lastDim = coords.${t_[tg-1]}; - coords.${t_[tg-1]} = coords.${t_[tg-2]}; - coords.${t_[tg-2]} = lastDim; - - ${tu} - vec4 outputValue = ${td}(${tv}); - ${tx} - } - `;return new ta.GlslLibRoutine(tS,["coordinates.getOutputCoords"])}getUnpackedSamplerAtOutputCoords(tr,tn,ti,ts){let tu;let tc=[ti.width,ti.height],tp=[tn.width,tn.height],tf=tn.unpackedShape.length,td=ti.unpackedShape.length,th=tn.unpackedShape,tg=ti.unpackedShape,tb=(0,tl.generateShaderFuncNameFromInputSamplerName)(ts);if(tf===td&&to.ArrayUtil.arraysEqual(tp,tc)){let tn=` - float ${tr}() { - return sampleTexture(${ts}, TexCoords); - } - `;return new ta.GlslLibRoutine(tn,["coordinates.sampleTexture"])}let tm=(0,tl.getCoordsDataType)(td),ty=to.BroadcastUtil.getBroadcastDims(th,tg),t_=td-tf,tv=(0,tl.getGlChannels)();tu=0===tf?"":td<2&&ty.length>=1?"coords = 0;":ty.map(tr=>`coords.${tv[tr+t_]} = 0;`).join("\n");let tx="";tx=td<2&&tf>0?"coords":tn.unpackedShape.map((tr,tn)=>`coords.${tv[tn+t_]}`).join(", ");let tw=` - float ${tr}() { - ${tm} coords = getOutputCoords(); - ${tu} - return ${tb}(${tx}); - } - `;return new ta.GlslLibRoutine(tw,["coordinates.getOutputCoords"])}getPackedSamplerFromInput(tr,tn,ti){switch(ti.unpackedShape.length){case 0:return this.getPackedSamplerScalar(tr,tn);case 1:return this.getPackedSampler1D(tr,tn,ti);case 2:return this.getPackedSampler2D(tr,tn,ti);case 3:return this.getPackedSampler3D(tr,tn,ti);default:return this.getPackedSamplerND(tr,tn,ti)}}getUnpackedSamplerFromInput(tr,tn,ti){let to=ti.unpackedShape;switch(to.length){case 0:return this.getUnpackedSamplerScalar(tr,tn,ti);case 1:return this.getUnpackedSampler1D(tr,tn,ti);case 2:return this.getUnpackedSampler2D(tr,tn,ti);case 3:return this.getUnpackedSampler3D(tr,tn,ti);case 4:return this.getUnpackedSampler4D(tr,tn,ti);case 5:return this.getUnpackedSampler5D(tr,tn,ti);case 6:return this.getUnpackedSampler6D(tr,tn,ti);default:throw Error(`Unsupported dimension ${to.length}-D`)}}getPackedSamplerScalar(tr,tn){let ti=` - vec4 ${tr}() { - return ${(0,ts.getGlsl)(this.context.glContext.version).texture2D}(${tn}, halfCR); - } - `;return new ta.GlslLibRoutine(ti)}getPackedSampler1D(tr,tn,ti){let to=[ti.width,ti.height],tu=[to[1],to[0]],tl=(0,ts.getGlsl)(this.context.glContext.version),tc=`vec4 ${tr}(int index) { - vec2 uv = packedUVfrom1D( - ${tu[0]}, ${tu[1]}, index); - return ${tl.texture2D}(${tn}, uv); - }`;return new ta.GlslLibRoutine(tc,["coordinates.packedUVfrom1D"])}getPackedSampler2D(tr,tn,ti){let tu=ti.unpackedShape,tl=[ti.width,ti.height],tc=(0,ts.getGlsl)(this.context.glContext.version),tp=tl[0],tf=tl[1];if(null!=tl&&to.ArrayUtil.arraysEqual(tu,tl)){let ti=`vec4 ${tr}(int row, int col) { - vec2 uv = (vec2(col, row) + halfCR) / vec2(${tf}.0, ${tp}.0); - return ${tc.texture2D}(${tn}, uv); - }`;return new ta.GlslLibRoutine(ti)}let td=tl,th=Math.ceil(tu[1]/2),tg=`vec4 ${tr}(int row, int col) { - vec2 uv = packedUVfrom2D(${td[1]}, ${td[0]}, ${th}, row, col); - return ${tc.texture2D}(${tn}, uv); - }`;return new ta.GlslLibRoutine(tg,["coordinates.packedUVfrom2D"])}getPackedSampler3D(tr,tn,ti){let to=ti.unpackedShape,tu=[ti.width,ti.height],tc=[tu[0],tu[1]],tp=(0,ts.getGlsl)(this.context.glContext.version);if(1===to[0]){let ts=to.slice(1),tu=[1,2],tc=(0,tl.squeezeInputShape)(to,ts),tp=["b","row","col"],tf=JSON.parse(JSON.stringify(ti));tf.unpackedShape=tc;let td=this.getPackedSamplerFromInput(tr,tn,tf),th=`${td.routineBody} - vec4 ${tr}(int b, int row, int col) { 
- return ${tr}(${(0,tl.getSqueezedParams)(tp,tu)}); - } `;return new ta.GlslLibRoutine(th,td.dependencies)}let tf=tc[0],td=tc[1],th=Math.ceil(to[2]/2),tg=`vec4 ${tr}(int b, int row, int col) { - vec2 uv = packedUVfrom3D( - ${td}, ${tf}, ${th*Math.ceil(to[1]/2)}, ${th}, b, row, col); - return ${tp.texture2D}(${tn}, uv);}`;return new ta.GlslLibRoutine(tg,["coordinates.packedUVfrom3D"])}getPackedSamplerND(tr,tn,ti){let to=ti.unpackedShape,tu=to.length,tl=[ti.width,ti.height],tc=(0,ts.getGlsl)(this.context.glContext.version),tp=[tl[0],tl[1]],tf=tp[1],td=tp[0],th=Math.ceil(to[tu-1]/2),tg=th*Math.ceil(to[tu-2]/2),tb="int b, int row, int col",tm=`b * ${tg} + (row / 2) * ${th} + (col / 2)`;for(let tr=2;tr{let to=this.context.inputTextureLayouts[ti],ts=(to.unpackedShape.length>0?to.unpackedShape:to.shape).length,tu=`_${tn}`;tr[tu]=new ta.GlslLibRoutine(this.getValueFromSingle(tn,ts,to.width,to.height,!1),[`shapeUtils.indicesToOffset${tu}`,"coordinates.offsetToCoords","fragcolor.getColorAsFloat"]),tr[tu+="_T"]=new ta.GlslLibRoutine(this.getValueFromSingle(tn,ts,to.width,to.height,!0),[`shapeUtils.indicesToOffset${tu}`,"coordinates.offsetToCoords","fragcolor.getColorAsFloat"])}),tr}getValueFromSingle(tr,tn,ti,to,ta){let tu=`_${tr}`;return ta&&(tu+="_T"),` - float ${tu}(int m[${tn}]) { - int offset = indicesToOffset${tu}(m); - vec2 coords = offsetToCoords(offset, ${ti}, ${to}); - float value = getColorAsFloat(${(0,ts.getGlsl)(this.context.glContext.version).texture2D}(${tr}, coords)); - return value; - } - `}getPackedValueFrom(tr,tn,ti,to,ta){let tu=`_${tr}_Pack`;return ta&&(tu+="_T"),` - vec4 ${tu}(int m[${tn}]) { - int offset = indicesToOffset_${tr}(m); - vec2 coords = offsetToCoords(offset, ${ti}, ${to}); - return ${(0,ts.getGlsl)(this.context.glContext.version).texture2D}(${tr}, coords); - } - `}}tn.CoordsGlslLib=tc},8520:(tr,tn)=>{"use strict";var ti;Object.defineProperty(tn,"__esModule",{value:!0}),tn.TopologicalSortGlslRoutines=tn.GlslLibRoutineNode=tn.GlslLibRoutine=tn.GlslLib=tn.GlslContext=tn.FunctionType=void 0,(ti=tn.FunctionType||(tn.FunctionType={}))[ti.ValueBased=0]="ValueBased",ti[ti.Positional=1]="Positional",tn.GlslContext=class{constructor(tr,tn,ti,to){this.glContext=tr,this.programInfo=tn,this.inputTextureLayouts=ti,this.outputTextureLayout=to}},tn.GlslLib=class{constructor(tr){this.context=tr}},tn.GlslLibRoutine=class{constructor(tr,tn){this.routineBody=tr,this.dependencies=tn}},tn.GlslLibRoutineNode=class{constructor(tr,tn,ti){this.name=tr,this.dependencies=ti||[],tn&&(this.routineBody=tn)}addDependency(tr){tr&&this.dependencies.push(tr)}},tn.TopologicalSortGlslRoutines=class{static returnOrderedNodes(tr){if(!tr||0===tr.length)return[];if(1===tr.length)return tr;let tn=new Set,ti=new Set,to=[];return this.createOrderedNodes(tr,tn,ti,to),to}static createOrderedNodes(tr,tn,ti,to){for(let ta=0;ta0)for(let tr=0;tr{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.EncodingGlslLib=void 0;let to=ti(8520);class ta extends to.GlslLib{constructor(tr){super(tr)}getFunctions(){return Object.assign(Object.assign({},this.encodeFloat32()),this.decodeFloat32())}getCustomTypes(){return{}}encodeFloat32(){return{encode:new to.GlslLibRoutine("highp vec4 encode(highp float f) {\n return vec4(f, 0.0, 0.0, 0.0);\n }\n ")}}decodeFloat32(){return{decode:new to.GlslLibRoutine("highp float decode(highp vec4 rgba) {\n return rgba.r;\n }\n ")}}encodeUint8(){let tr=ta.isLittleEndian()?"rgba.rgba=rgba.abgr;":"";return{encode:new to.GlslLibRoutine(` - highp vec4 encode(highp float f) { - highp 
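// --- Editor's illustrative sketch: TopologicalSortGlslRoutines above orders shader
// routines so dependencies are emitted before their dependents. A minimal DFS
// post-order over the same { name, dependencies } node shape (the real class also
// tracks a second in-progress set to tolerate cycles, omitted here):
function orderRoutines(nodes) {
  const visited = new Set();
  const ordered = [];
  const visit = (node) => {
    if (visited.has(node.name)) return;
    visited.add(node.name);
    (node.dependencies || []).forEach(visit);
    ordered.push(node); // emit only after everything it depends on
  };
  nodes.forEach(visit);
  return ordered;
}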
float F = abs(f); - highp float Sign = step(0.0,-f); - highp float Exponent = floor(log2(F)); - highp float Mantissa = (exp2(- Exponent) * F); - Exponent = floor(log2(F) + 127.0) + floor(log2(Mantissa)); - highp vec4 rgba; - rgba[0] = 128.0 * Sign + floor(Exponent*exp2(-1.0)); - rgba[1] = 128.0 * mod(Exponent,2.0) + mod(floor(Mantissa*128.0),128.0); - rgba[2] = floor(mod(floor(Mantissa*exp2(23.0 -8.0)),exp2(8.0))); - rgba[3] = floor(exp2(23.0)*mod(Mantissa,exp2(-15.0))); - ${tr} - rgba = rgba / 255.0; // values need to be normalized to [0,1] - return rgba; - } - `)}}decodeUint8(){let tr=ta.isLittleEndian()?"rgba.rgba=rgba.abgr;":"";return{decode:new to.GlslLibRoutine(` - highp float decode(highp vec4 rgba) { - rgba = rgba * 255.0; // values need to be de-normalized from [0,1] to [0,255] - ${tr} - highp float Sign = 1.0 - step(128.0,rgba[0])*2.0; - highp float Exponent = 2.0 * mod(rgba[0],128.0) + step(128.0,rgba[1]) - 127.0; - highp float Mantissa = mod(rgba[1],128.0)*65536.0 + rgba[2]*256.0 +rgba[3] + float(0x800000); - highp float Result = Sign * exp2(Exponent) * (Mantissa * exp2(-23.0 )); - return Result; - } - `)}}static isLittleEndian(){let tr=new ArrayBuffer(4),tn=new Uint32Array(tr),ti=new Uint8Array(tr);if(tn[0]=3735928559,239===ti[0])return!0;if(222===ti[0])return!1;throw Error("unknown endianness")}}tn.EncodingGlslLib=ta},9894:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.FragColorGlslLib=void 0;let to=ti(8520),ta=ti(5060);class ts extends to.GlslLib{constructor(tr){super(tr)}getFunctions(){return Object.assign(Object.assign({},this.setFragColor()),this.getColorAsFloat())}getCustomTypes(){return{}}setFragColor(){let tr=(0,ta.getGlsl)(this.context.glContext.version);return{setFragColor:new to.GlslLibRoutine(` - void setFragColor(float value) { - ${tr.output} = encode(value); - } - `,["encoding.encode"])}}getColorAsFloat(){return{getColorAsFloat:new to.GlslLibRoutine("\n float getColorAsFloat(vec4 color) {\n return decode(color);\n }\n ",["encoding.decode"])}}}tn.FragColorGlslLib=ts},2848:(tr,tn)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.replaceInlines=void 0;let ti=/@inline[\s\n\r]+(\w+)[\s\n\r]+([0-9a-zA-Z_]+)\s*\(([^)]*)\)\s*{(([^}]|[\n\r])*)}/gm;tn.replaceInlines=function(tr){let tn;let to={};for(;null!==(tn=ti.exec(tr));){let tr=tn[3].split(",").map(tr=>{let tn=tr.trim().split(" ");return tn&&2===tn.length?{type:tn[0],name:tn[1]}:null}).filter(tr=>null!==tr);to[tn[2]]={params:tr,body:tn[4]}}for(let ti in to){let ta="(\\w+)?\\s+([_0-9a-zA-Z]+)\\s+=\\s+__FUNC__\\((.*)\\)\\s*;".replace("__FUNC__",ti),ts=RegExp(ta,"gm");for(;null!==(tn=ts.exec(tr));){let ta=tn[1],ts=tn[2],tu=tn[3].split(","),tl=ta?`${ta} ${ts};`:"",tc=to[ti].body,tp="";to[ti].params.forEach((tr,tn)=>{tr&&(tp+=`${tr.type} ${tr.name} = ${tu[tn]}; -`)}),tc=(tc=`${tp} - ${tc}`).replace("return",`${ts} = `);let tf=` - ${tl} - { - ${tc} - } - `;tr=tr.replace(tn[0],tf)}}return tr.replace(ti,"")}},8879:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.GlslPreprocessor=void 0;let to=ti(8520),ta=ti(2848),ts=ti(5483),tu=ti(5060);tn.GlslPreprocessor=class{constructor(tr,tn,ti,ta){this.libs={},this.glslLibRoutineDependencyGraph={},this.context=new to.GlslContext(tr,tn,ti,ta),Object.keys(ts.glslRegistry).forEach(tr=>{let tn=new ts.glslRegistry[tr](this.context);this.libs[tr]=tn});let tu=this.glslLibRoutineDependencyGraph;for(let tr in this.libs){let tn=this.libs[tr].getFunctions();for(let ti in tn){let ta;let 
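// --- Editor's illustrative sketch: EncodingGlslLib above round-trips float32 values
// through RGBA8 textures by reinterpreting the four IEEE-754 bytes. In JS the same
// reinterpretation is a typed-array view, and isLittleEndian() is exactly the
// 0xDEADBEEF probe the class performs:
function floatToRgba8(f) {
  const buf = new ArrayBuffer(4);
  new Float32Array(buf)[0] = f;
  return Array.from(new Uint8Array(buf)); // platform byte order
}
function rgba8ToFloat(bytes) {
  const buf = new ArrayBuffer(4);
  new Uint8Array(buf).set(bytes);
  return new Float32Array(buf)[0];
}
function isLittleEndian() {
  const u32 = new Uint32Array(1);
  u32[0] = 0xdeadbeef;
  const b0 = new Uint8Array(u32.buffer)[0];
  if (b0 === 0xef) return true;  // low byte first
  if (b0 === 0xde) return false; // high byte first
  throw new Error('unknown endianness');
}
// rgba8ToFloat(floatToRgba8(1.5)) === 1.5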
ts=tr+"."+ti;tu[ts]?(ta=tu[ts]).routineBody=tn[ti].routineBody:(ta=new to.GlslLibRoutineNode(ts,tn[ti].routineBody),tu[ts]=ta);let tl=tn[ti].dependencies;if(tl)for(let tr=0;tr{let to=ti.split(".")[1];-1!==tr.indexOf(to)&&tn.push(this.glslLibRoutineDependencyGraph[ti])}),to.TopologicalSortGlslRoutines.returnOrderedNodes(tn)}getUniforms(tr,tn){let ti=[];if(tr)for(let tn of tr)ti.push(`uniform sampler2D ${tn};`);if(tn)for(let tr of tn)ti.push(`uniform ${tr.type} ${tr.name}${tr.arrayLength?`[${tr.arrayLength}]`:""};`);return ti.join("\n")}}},5483:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.glslRegistry=void 0;let to=ti(5107),ta=ti(7341),ts=ti(9894),tu=ti(2655),tl=ti(3891);tn.glslRegistry={encoding:ta.EncodingGlslLib,fragcolor:ts.FragColorGlslLib,vec:tl.VecGlslLib,shapeUtils:tu.ShapeUtilsGlslLib,coordinates:to.CoordsGlslLib}},2655:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.ShapeUtilsGlslLib=void 0;let to=ti(8520);class ta extends to.GlslLib{constructor(tr){super(tr)}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},this.bcastIndex()),this.bcastMatmulIndex()),this.offsetToIndices()),this.indicesToOffset()),this.incrementIndices())}getCustomTypes(){return{}}bcastIndex(){let tr=this.context.outputTextureLayout.shape.length,tn={};return this.context.programInfo.inputNames.forEach((ti,ta)=>{let ts=this.context.inputTextureLayouts[ta].unpackedShape;if(ts.length<=tr){let ta=ts.length,tu=tr-ta,tl=`bcastIndices_${ti}`,tc="";for(let tr=0;tr{let ts=this.context.inputTextureLayouts[ta].shape;if(!(ts.length<2||ts.length>tr)){let ta=ts.length,tu=tr-ta,tl=`bcastMatmulIndices_${ti}`,tc="";for(let tr=0;tr{let ts=this.context.inputTextureLayouts[ti].shape,tu=this.context.inputTextureLayouts[ti].strides,tl=ts.length,tc=`indicesToOffset_${tn}`;tr[tc]=new to.GlslLibRoutine(ta.indexToOffsetSingle(tc,tl,tu)),tr[tc=`indicesToOffset_${tn}_T`]=new to.GlslLibRoutine(ta.indexToOffsetSingle(tc,tl,tu.slice().reverse()))}),tr}static indexToOffsetSingle(tr,tn,ti){let to="";for(let tr=tn-1;tr>=0;--tr)to+=` - offset += indices[${tr}] * ${ti[tr]}; - `;return` - int ${tr}(int indices[${tn}]) { - int offset = 0; - ${to} - return offset; - } - `}offsetToIndices(){let tr={};return this.context.programInfo.inputNames.forEach((tn,ti)=>{let ts=this.context.inputTextureLayouts[ti].shape,tu=this.context.inputTextureLayouts[ti].strides,tl=ts.length,tc=`offsetToIndices_${tn}`;tr[tc]=new to.GlslLibRoutine(ta.offsetToIndicesSingle(tc,tl,tu)),tr[tc=`offsetToIndices_${tn}_T`]=new to.GlslLibRoutine(ta.offsetToIndicesSingle(tc,tl,tu.slice().reverse()))}),tr}static offsetToIndicesSingle(tr,tn,ti){let to=[];for(let tr=0;tr{let ta=this.context.inputTextureLayouts[ti].shape,ts=ta.length,tu=`incrementIndices_${tn}`,tl="";for(let tr=0;tr= 0; --i) { - if(i > axis) continue; - indices[i] += 1; - if(indices[i] < shape[i]) { - break; - } - indices[i] = 0; - } - } - `;tr[tu]=new to.GlslLibRoutine(tc)}),tr}}tn.ShapeUtilsGlslLib=ta},5060:(tr,tn)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.getDefaultFragShaderMain=tn.getFragShaderPreamble=tn.getVertexShaderSource=tn.getGlsl=void 0;let ti={version:"",attribute:"attribute",varyingVertex:"varying",varyingFrag:"varying",texture2D:"texture2D",output:"gl_FragColor",outputDeclaration:""},to={version:"#version 300 es",attribute:"in",varyingVertex:"out",varyingFrag:"in",texture2D:"texture",output:"outputColor",outputDeclaration:"out vec4 outputColor;"};function ta(tr){return 
1===tr?ti:to}tn.getGlsl=ta,tn.getVertexShaderSource=function(tr){let tn=ta(tr);return`${tn.version} - precision highp float; - ${tn.attribute} vec3 position; - ${tn.attribute} vec2 textureCoord; - - ${tn.varyingVertex} vec2 TexCoords; - - void main() - { - gl_Position = vec4(position, 1.0); - TexCoords = textureCoord; - }`},tn.getFragShaderPreamble=function(tr){let tn=ta(tr);return`${tn.version} - precision highp float; - precision highp int; - precision highp sampler2D; - ${tn.varyingFrag} vec2 TexCoords; - ${tn.outputDeclaration} - const vec2 halfCR = vec2(0.5, 0.5); - - // Custom vector types to handle higher dimenalities. - struct ivec5 - { - int x; - int y; - int z; - int w; - int u; - }; - - struct ivec6 - { - int x; - int y; - int z; - int w; - int u; - int v; - }; - - int imod(int x, int y) { - return x - y * (x / y); - } - - `},tn.getDefaultFragShaderMain=function(tr,tn){return` - void main() { - int indices[${tn}]; - toVec(TexCoords, indices); - vec4 result = vec4(process(indices)); - ${ta(tr).output} = result; - } - `}},3891:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.VecGlslLib=void 0;let to=ti(8520);class ta extends to.GlslLib{constructor(tr){super(tr)}getCustomTypes(){return{}}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign({},this.binaryVecFunctions()),this.copyVec()),this.setVecItem()),this.getVecItem())}binaryVecFunctions(){let tr=this.context.outputTextureLayout.shape.length,tn={add:"+=",sub:"-=",mul:"*=",div:"/="},ti={};for(let ta in tn){let ts=`${ta}Vec`,tu="";for(let ti=0;ti{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.WebGLInferenceHandler=void 0;let to=ti(6231),ta=ti(9162),ts=ti(2517),tu=ti(2403),tl=ti(7019),tc=ti(8710),tp=ti(5611),tf=ti(4057),td=ti(2039);tn.WebGLInferenceHandler=class{constructor(tr){this.session=tr,this.packedTextureDataCache=new Map,this.unpackedTextureDataCache=new Map}calculateTextureWidthAndHeight(tr,tn){return(0,tf.calculateTextureWidthAndHeight)(this.session.layoutStrategy,tr,tn)}executeProgram(tr,tn){if(tn.length{let ti=tn.map(tr=>`${tr.unpackedShape.join(",")};${tr.width}x${tr.height}`).join("_"),to=tr.name;return tr.cacheHint&&(to+="["+tr.cacheHint+"]"),to+=":"+ti})(tr,ti),ta=this.session.programManager.getArtifact(to),ts=ta?ta.programInfo:"function"==typeof tr.get?tr.get():tr,tu=(0,tf.createTextureLayoutFromTextureType)(this.session.layoutStrategy,ts.output.dims,ts.output.textureType),tl=this.createTextureData(tu,ts.output.type);return ta||(ta=this.session.programManager.build(ts,ti,tl),this.session.programManager.setArtifact(to,ta)),this.runProgram(ta,ti,tl),tl}run(tr,tn){return this.executeProgram(tr,tn).tensor}runProgram(tr,tn,ti){for(let ti=0;tithis.readTexture(tu),async tr=>this.readTextureAsync(tu),void 0,ts),texture:ti});return this.setTextureData(tu.tensor.dataId,tu,tr.isPacked),tu}getTextureData(tr,tn=!1){return this.session.isInitializer(tr)?this.session.getTextureData(tr,tn):tn?this.packedTextureDataCache.get(tr):this.unpackedTextureDataCache.get(tr)}setTextureData(tr,tn,ti=!1){this.session.isInitializer(tr)?this.session.setTextureData(tr,tn,ti):(ti?this.packedTextureDataCache:this.unpackedTextureDataCache).set(tr,tn)}isTextureLayoutCached(tr,tn=!1){return!!this.getTextureData(tr.dataId,tn)}dispose(){this.session.textureManager.clearActiveTextures(),this.packedTextureDataCache.forEach(tr=>this.session.textureManager.releaseTexture(tr)),this.packedTextureDataCache=new 
Map,this.unpackedTextureDataCache.forEach(tr=>this.session.textureManager.releaseTexture(tr)),this.unpackedTextureDataCache=new Map}readTexture(tr){return tr.isPacked?this.readTexture(this.unpack(tr)):this.session.backend.glContext.isFloat32DownloadSupported?this.session.textureManager.readTexture(tr,tr.tensor.type,tr.channels):this.session.textureManager.readUint8TextureAsFloat((0,tc.encodeAsUint8)(this,tr))}async readTextureAsync(tr){return tr.isPacked?this.readTextureAsync(this.unpack(tr)):this.session.backend.glContext.isFloat32DownloadSupported?this.session.textureManager.readTextureAsync(tr,tr.tensor.type,tr.channels):this.session.textureManager.readUint8TextureAsFloat((0,tc.encodeAsUint8)(this,tr))}pack(tr){return this.executeProgram((0,tu.createPackProgramInfoLoader)(this,tr.tensor),[tr.tensor])}unpack(tr){return this.executeProgram((0,tp.createUnpackProgramInfoLoader)(this,tr.tensor),[tr.tensor])}}},1640:function(tr,tn,ti){"use strict";var to=this&&this.__createBinding||(Object.create?function(tr,tn,ti,to){void 0===to&&(to=ti);var ta=Object.getOwnPropertyDescriptor(tn,ti);ta&&!("get"in ta?!tn.__esModule:ta.writable||ta.configurable)||(ta={enumerable:!0,get:function(){return tn[ti]}}),Object.defineProperty(tr,to,ta)}:function(tr,tn,ti,to){void 0===to&&(to=ti),tr[to]=tn[ti]}),ta=this&&this.__setModuleDefault||(Object.create?function(tr,tn){Object.defineProperty(tr,"default",{enumerable:!0,value:tn})}:function(tr,tn){tr.default=tn}),ts=this&&this.__importStar||function(tr){if(tr&&tr.__esModule)return tr;var tn={};if(null!=tr)for(var ti in tr)"default"!==ti&&Object.prototype.hasOwnProperty.call(tr,ti)&&to(tn,tr,ti);return ta(tn,tr),tn};Object.defineProperty(tn,"__esModule",{value:!0}),tn.WEBGL_OP_RESOLVE_RULES=void 0;let tu=ti(2898),tl=ts(ti(7839)),tc=ti(4196),tp=ti(2069),tf=ti(8138),td=ti(9663),th=ti(5193),tg=ti(7992),tb=ti(1253),tm=ti(4776),ty=ti(6572),t_=ti(3346),tv=ti(5623),tx=ti(2870),tw=ti(2143),tT=ti(4939),tS=ti(718),tO=ti(2268),tA=ti(8117),tE=ti(2278),tI=ti(5524),tP=ti(5975),tD=ti(3933),t$=ti(6558),tk=ti(5723),tC=ti(3738),tF=ts(ti(4909)),tN=ti(8428),tL=ti(9793);tn.WEBGL_OP_RESOLVE_RULES=[["Abs","","6+",tF.abs],["Acos","","7+",tF.acos],["Add","","7+",tl.add],["And","","7+",tl.and],["Asin","","7+",tF.asin],["Atan","","7+",tF.atan],["AveragePool","","7+",tw.averagePool,tw.parseAveragePoolAttributes],["BatchNormalization","","7+",tu.batchNormalization,tu.parseBatchNormalizationAttributes],["Cast","","6+",tc.cast,tc.parseCastAttributes],["Ceil","","6+",tF.ceil],["Clip","","6-10",tF.clip,tF.parseClipAttributes],["Clip","","11+",tF.clipV11],["Concat","","4+",tp.concat,tp.parseConcatAttributes],["Conv","","1+",tf.conv,tf.parseConvAttributes],["ConvTranspose","","1+",td.convTranspose,td.parseConvTransposeAttributes],["Cos","","7+",tF.cos],["Div","","7+",tl.div],["Dropout","","7+",tF.identity],["DepthToSpace","","1+",th.depthToSpace,th.parseDepthToSpaceAttributes],["Equal","","7+",tl.equal],["Elu","","6+",tF.elu,tF.parseEluAttributes],["Exp","","6+",tF.exp],["Flatten","","1+",tg.flatten,tg.parseFlattenAttributes],["Floor","","6+",tF.floor],["FusedConv","com.microsoft","1+",tf.conv,tf.parseConvAttributes],["Gather","","1+",tb.gather,tb.parseGatherAttributes],["Gemm","","7-10",tm.gemm,tm.parseGemmAttributesV7],["Gemm","","11+",tm.gemm,tm.parseGemmAttributesV11],["GlobalAveragePool","","1+",tw.globalAveragePool,tw.parseGlobalAveragePoolAttributes],["GlobalMaxPool","","1+",tw.globalMaxPool],["Greater","","7+",tl.greater],["Identity","","1+",tF.identity],["ImageScaler","","1+",ty.imageScaler,
ty.parseImageScalerAttributes],["InstanceNormalization","","6+",t_.instanceNormalization,t_.parseInstanceNormalizationAttributes],["LeakyRelu","","6+",tF.leakyRelu,tF.parseLeakyReluAttributes],["Less","","7+",tl.less],["Log","","6+",tF.log],["MatMul","","1+",tv.matMul,tv.parseMatMulAttributes],["MaxPool","","1+",tw.maxPool,tw.parseMaxPoolAttributes],["Mul","","7+",tl.mul],["Neg","","6+",tF.neg],["Not","","1+",tF.not],["Or","","7+",tl.or],["Pad","","2-10",tx.padV2,tx.parsePadAttributesV2],["Pad","","11+",tx.padV11,tx.parsePadAttributesV11],["Pow","","7+",tl.pow],["PRelu","","7+",tl.pRelu],["ReduceLogSum","","1+",tT.reduceLogSum,tT.parseReduceAttributes],["ReduceMax","","1+",tT.reduceMax,tT.parseReduceAttributes],["ReduceMean","","1+",tT.reduceMean,tT.parseReduceAttributes],["ReduceMin","","1+",tT.reduceMin,tT.parseReduceAttributes],["ReduceProd","","1+",tT.reduceProd,tT.parseReduceAttributes],["ReduceSum","","1-12",tT.reduceSum,tT.parseReduceAttributes],["ReduceSumSquare","","1+",tT.reduceLogSumSquare,tT.parseReduceAttributes],["Relu","","6+",tF.relu],["Reshape","","5+",tS.reshape],["Resize","","10",tO.resize,tO.parseResizeAttributesV10],["Resize","","11+",tO.resize,tO.parseResizeAttributesV11],["Shape","","1+",tA.shape],["Sigmoid","","6+",tF.sigmoid],["Sin","","7+",tF.sin],["Slice","","10+",tE.sliceV10],["Slice","","1-9",tE.slice,tE.parseSliceAttributes],["Softmax","","1-12",tI.softmax,tI.parseSoftmaxAttributes],["Softmax","","13+",tI.softmaxV13,tI.parseSoftmaxAttributesV13],["Split","","2-12",tP.split,tP.parseSplitAttributes],["Sqrt","","6+",tF.sqrt],["Squeeze","","1-12",tD.squeeze,tD.parseSqueezeAttributes],["Squeeze","","13+",tD.squeezeV13],["Sub","","7+",tl.sub],["Sum","","6+",t$.sum],["Tan","","7+",tF.tan],["Tanh","","6+",tF.tanh],["Tile","","6+",tk.tile],["Transpose","","1+",tC.transpose,tC.parseTransposeAttributes],["Upsample","","7-8",tL.upsample,tL.parseUpsampleAttributesV7],["Upsample","","9",tL.upsample,tL.parseUpsampleAttributesV9],["Unsqueeze","","1-12",tN.unsqueeze,tN.parseUnsqueezeAttributes],["Unsqueeze","","13+",tN.unsqueezeV13],["Xor","","7+",tl.xor]]},2898:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseBatchNormalizationAttributes=tn.batchNormalization=void 0;let to=ti(246),ta=ti(5060),ts=ti(2039),tu={name:"BatchNormalization",inputNames:["A","Scale","B","Mean","Variance"],inputTypes:[ts.TextureType.unpacked,ts.TextureType.unpacked,ts.TextureType.unpacked,ts.TextureType.unpacked,ts.TextureType.unpacked]};tn.batchNormalization=(tr,tn,ti)=>(tc(tn),[tr.run(Object.assign(Object.assign({},tu),{cacheHint:ti.cacheKey,get:()=>tl(tr,tn,ti)}),tn)]),tn.parseBatchNormalizationAttributes=tr=>{let tn=tr.attributes.getFloat("epsilon",1e-5),ti=tr.attributes.getFloat("momentum",.9),ta=tr.attributes.getInt("spatial",1);return(0,to.createAttributeWithCacheKey)({epsilon:tn,momentum:ti,spatial:ta})};let tl=(tr,tn,ti)=>{let to=(0,ta.getGlsl)(tr.session.backend.glContext.version),tl=tn[0].dims.length,[tc,tp]=tr.calculateTextureWidthAndHeight(tn[1].dims,ts.TextureType.unpacked),tf=` - float process(int[${tl}] indices) { - vec2 position = offsetToCoords(indices[1], ${tc}, ${tp}); - float scale = getColorAsFloat(${to.texture2D}(Scale, position)); - float mean = getColorAsFloat(${to.texture2D}(Mean, position)); - float variance = getColorAsFloat(${to.texture2D}(Variance, position)); - float b = getColorAsFloat(${to.texture2D}(B, position)); - - return scale * ( (_A(indices) - mean) / sqrt(variance + float(${ti.epsilon})) ) + b; - }`;return 
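// --- Editor's illustrative sketch: each rule above is [opType, domain, opsetRange,
// implementation, attributeParser?]. The range strings read "7+" (opset 7 or newer),
// "1-12" (inclusive range) or "10" (exact version). The matcher itself lives elsewhere
// in the bundle; this sketch shows the assumed semantics of those strings:
function opsetMatches(range, opset) {
  if (range.endsWith('+')) return opset >= parseInt(range, 10);
  const [lo, hi] = range.split('-').map(Number);
  return hi === undefined ? opset === lo : opset >= lo && opset <= hi;
}
// opsetMatches('7+', 13) === true; opsetMatches('1-12', 13) === false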
Object.assign(Object.assign({},tu),{output:{dims:tn[0].dims,type:tn[0].type,textureType:ts.TextureType.unpacked},shaderSource:tf})},tc=tr=>{if(!tr||5!==tr.length)throw Error("BatchNormalization requires 5 inputs.");let tn=tr[0],ti=tr[1],to=tr[2],ta=tr[3],ts=tr[4];if(tn.dims.length<3||1!==ti.dims.length||1!==to.dims.length||1!==ta.dims.length||1!==ts.dims.length||ti.dims[0]!==tn.dims[1]||to.dims[0]!==tn.dims[1]||ta.dims[0]!==tn.dims[1]||ts.dims[0]!==tn.dims[1])throw Error("invalid input shape.");if("float32"!==tn.type&&"float64"!==tn.type||"float32"!==ti.type&&"float64"!==ti.type||"float32"!==to.type&&"float64"!==to.type||"float32"!==ta.type&&"float64"!==ta.type||"float32"!==ts.type&&"float64"!==ts.type)throw Error("invalid input tensor types.")}},7839:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.xor=tn.sub=tn.pRelu=tn.pow=tn.or=tn.mul=tn.less=tn.greater=tn.equal=tn.div=tn.and=tn.add=tn.glslPRelu=tn.glslPow=tn.glslXor=tn.glslOr=tn.glslAnd=tn.glslLess=tn.glslGreater=tn.glslEqual=tn.glslSub=tn.glslMul=tn.glslDiv=tn.glslAdd=void 0;let to=ti(2517),ta=ti(8520),ts=ti(5060),tu=ti(2039);function tl(){let tr="add_";return{body:` - float ${tr}(float a, float b) { - return a + b; - } - vec4 ${tr}(vec4 v1, vec4 v2) { - return v1 + v2; - } - `,name:tr,type:ta.FunctionType.ValueBased}}function tc(){let tr="div_";return{body:` - float ${tr}(float a, float b) { - return a / b; - } - vec4 ${tr}(vec4 v1, vec4 v2) { - return v1 / v2; - } - `,name:tr,type:ta.FunctionType.ValueBased}}function tp(){let tr="mul_";return{body:` - float ${tr}(float a, float b) { - return a * b; - } - vec4 ${tr}(vec4 v1, vec4 v2) { - return v1 * v2; - } - `,name:tr,type:ta.FunctionType.ValueBased}}function tf(){let tr="sub_";return{body:` - float ${tr}(float a, float b) { - return a - b; - } - vec4 ${tr}(vec4 v1, vec4 v2) { - return v1 - v2; - } - `,name:tr,type:ta.FunctionType.ValueBased}}function td(){let tr="equal_";return{body:` - float ${tr}(float a, float b) { - return float(a == b); - } - vec4 ${tr}(vec4 v1, vec4 v2) { - return vec4(equal(v1, v2)); - } - `,name:tr,type:ta.FunctionType.ValueBased}}function th(){let tr="greater_";return{body:` - float ${tr}(float a, float b) { - return float(a > b); - } - vec4 ${tr}(vec4 v1, vec4 v2) { - return vec4( v1.r > v2.r , - v1.g > v2.g, - v1.b > v2.b, - v1.a > v2.a ); - } - `,name:tr,type:ta.FunctionType.ValueBased}}function tg(){let tr="less_";return{body:` - float ${tr}(float a, float b) { - return float(a < b); - } - vec4 ${tr}(vec4 v1, vec4 v2) { - return vec4( v1.r < v2.r , - v1.g < v2.g, - v1.b < v2.b, - v1.a < v2.a ); - } - `,name:tr,type:ta.FunctionType.ValueBased}}function tb(){let tr="and_";return{body:` - float ${tr}(float a, float b) { - return float( bool(a) && bool(b) ); - } - vec4 ${tr}(vec4 v1, vec4 v2) { - bvec4 b1 = bvec4(v1); - bvec4 b2 = bvec4(v2); - return vec4( b1.r && b2.r , - b1.g && b2.g, - b1.b && b2.b, - b1.a && b2.a ); - } - `,name:tr,type:ta.FunctionType.ValueBased}}function tm(){let tr="or_";return{body:` - float ${tr}(float a, float b) { - return float( bool(a) || bool(b) ); - } - vec4 ${tr}(vec4 v1, vec4 v2) { - bvec4 b1 = bvec4(v1); - bvec4 b2 = bvec4(v2); - return vec4( b1.r || b2.r , - b1.g || b2.g, - b1.b || b2.b, - b1.a || b2.a ); - } - `,name:tr,type:ta.FunctionType.ValueBased}}function ty(){let tr="xor_";return{body:` - float ${tr}(float a, float b) { - return float( bool(a) ^^ bool(b) ); - } - vec4 ${tr}(vec4 v1, vec4 v2) { - bvec4 b1 = bvec4(v1); - bvec4 b2 = bvec4(v2); - return vec4( b1.r ^^ b2.r , - b1.g ^^ 
b2.g, - b1.b ^^ b2.b, - b1.a ^^ b2.a ); - } - `,name:tr,type:ta.FunctionType.ValueBased}}function t_(){return function(tr){let tn=`${tr}_`;return{body:` - float ${tn}(float a, float b) { - return ${tr}(a, b); - } - vec4 ${tn}(vec4 v1, vec4 v2) { - return ${tr}(v1, v2); - } - `,name:tn,type:ta.FunctionType.ValueBased}}("pow")}function tv(){let tr="prelu_";return{body:` - float ${tr}(float a, float b) { - return a < 0.0 ? a * b: a; - } - vec4 ${tr}(vec4 v1, vec4 v2) { - return vec4( - v1.r < 0.0 ? v1.r * v2.r: v1.r, - v1.g < 0.0 ? v1.g * v2.g: v1.g, - v1.b < 0.0 ? v1.b * v2.b: v1.b, - v1.a < 0.0 ? v1.a * v2.a: v1.a - ); - } - `,name:tr,type:ta.FunctionType.ValueBased}}tn.glslAdd=tl,tn.glslDiv=tc,tn.glslMul=tp,tn.glslSub=tf,tn.glslEqual=td,tn.glslGreater=th,tn.glslLess=tg,tn.glslAnd=tb,tn.glslOr=tm,tn.glslXor=ty,tn.glslPow=t_,tn.glslPRelu=tv;let tx=(tr,tn,ti,to=tn[0].type,ta)=>{let ts=tr.session.pack?tu.TextureType.packed:tu.TextureType.unpacked;return{name:ti.name,inputNames:["A","B"],inputTypes:[ts,ts],cacheHint:ta,get:()=>tw(tr,tn,ti,to)}},tw=(tr,tn,ti,ta=tn[0].type)=>{let tl=tr.session.pack?tu.TextureType.packed:tu.TextureType.unpacked,tc=!to.ShapeUtil.areEqual(tn[0].dims,tn[1].dims),tp=tn[0].dims,tf=tr.session.pack;if(tc){let tu=to.BroadcastUtil.calcShape(tn[0].dims,tn[1].dims,!1);if(!tu)throw Error("Can't perform binary op on the given tensors");tp=tu;let tc=tp.length,td=0!==tn[0].dims.length?tn[0].dims.length:1,th=0!==tn[1].dims.length?tn[1].dims.length:1,tg=0!==tn[0].dims.length?"bcastIndices_A(indices, aindices);":"aindices[0] = 0;",tb=0!==tn[1].dims.length?"bcastIndices_B(indices, bindices);":"bindices[0] = 0;",tm=(0,ts.getGlsl)(tr.session.backend.glContext.version),ty=tf?` - ${ti.body} - void main() { - vec4 a = getAAtOutCoords(); - vec4 b = getBAtOutCoords(); - vec4 result = ${ti.name}(a, b); - ${tm.output} = result; - }`:` - ${ti.body} - float process(int indices[${tc}]) { - int aindices[${td}]; - int bindices[${th}]; - ${tg} - ${tb} - return ${ti.name}(_A(aindices), _B(bindices)); - }`;return{name:ti.name,inputNames:["A","B"],inputTypes:[tl,tl],output:{dims:tp,type:ta,textureType:tl},shaderSource:ty,hasMain:tf}}let td=(0,ts.getGlsl)(tr.session.backend.glContext.version),th=` - ${ti.body} - void main() { - vec4 v1 = ${td.texture2D}(A, TexCoords); - vec4 v2 = ${td.texture2D}(B, TexCoords); - vec4 result = ${ti.name}(v1, v2); - ${td.output} = result; - } - `;return{name:ti.name,inputNames:["A","B"],inputTypes:[tl,tl],output:{dims:tn[0].dims,type:ta,textureType:tl},shaderSource:th,hasMain:!0}};tn.add=(tr,tn)=>[tr.run(tx(tr,tn,tl()),tn)],tn.and=(tr,tn)=>[tr.run(tx(tr,tn,tb(),"bool"),tn)],tn.div=(tr,tn)=>[tr.run(tx(tr,tn,tc()),tn)],tn.equal=(tr,tn)=>[tr.run(tx(tr,tn,td(),"bool"),tn)],tn.greater=(tr,tn)=>[tr.run(tx(tr,tn,th(),"bool"),tn)],tn.less=(tr,tn)=>[tr.run(tx(tr,tn,tg(),"bool"),tn)],tn.mul=(tr,tn)=>[tr.run(tx(tr,tn,tp()),tn)],tn.or=(tr,tn)=>[tr.run(tx(tr,tn,tm(),"bool"),tn)],tn.pow=(tr,tn)=>[tr.run(tx(tr,tn,t_()),tn)],tn.pRelu=(tr,tn)=>[tr.run(tx(tr,tn,tv()),tn)],tn.sub=(tr,tn)=>[tr.run(tx(tr,tn,tf()),tn)],tn.xor=(tr,tn)=>[tr.run(tx(tr,tn,ty(),"bool"),tn)]},4196:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseCastAttributes=tn.cast=void 0;let to=ti(2517);tn.cast=(tr,tn,ti)=>(ta(tn),[tr.cast(tn[0],ti)]),tn.parseCastAttributes=tr=>to.ProtoUtil.tensorDataTypeFromProto(tr.attributes.getInt("to"));let ta=tr=>{if(!tr||1!==tr.length)throw Error("Cast requires 1 input.");if("string"===tr[0].type)throw Error("Invalid input 
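// --- Editor's illustrative sketch: the binary-op builder above switches to a
// broadcast shader when the two input shapes differ; the output shape follows the
// usual right-aligned broadcasting rule (1 stretches, equal dims pass, anything else
// is an error), which is what BroadcastUtil.calcShape is assumed to implement:
function broadcastShape(a, b) {
  const rank = Math.max(a.length, b.length), out = new Array(rank);
  for (let i = 1; i <= rank; ++i) {
    const da = a[a.length - i] ?? 1, db = b[b.length - i] ?? 1;
    if (da !== db && da !== 1 && db !== 1) throw new Error("Can't perform binary op on the given tensors");
    out[rank - i] = Math.max(da, db);
  }
  return out;
}
// broadcastShape([2, 3, 4], [3, 1]) -> [2, 3, 4]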
type.")}},1163:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.createPackedConcatProgramInfoLoader=void 0;let to=ti(5060),ta=ti(2039),ts=ti(9390),tu=ti(2827);tn.createPackedConcatProgramInfoLoader=(tr,tn,ti)=>{var tc,tp;let tf=(tc=tn.length,tp=ti.cacheKey,{name:"Concat (packed)",inputNames:Array.from({length:tc},(tr,tn)=>`X${tn}`),inputTypes:Array(tc).fill(ta.TextureType.packed),cacheHint:tp});return Object.assign(Object.assign({},tf),{get:()=>((tr,tn,ti,tc)=>{let tp=ti[0].dims.slice();if(tc>=tp.length||tc<-1*tp.length)throw Error("axis specified for concat doesn't match input dimensionality");tc<0&&(tc=tp.length+tc);let tf=tp.slice(0);for(let tr=1;trtr.dims),ty=(0,ts.getGlChannels)(td),t_=Array(tm.length-1);t_[0]=tm[0][tc];for(let tr=1;tr= ${t_[tr-1]}) { - return getChannel( - getX${tr}(${tl(ty,tv,tn)}), - vec2(${tl(tx,tv,tn)})); - }`}let tS=t_.length,tO=t_[t_.length-1];tT+=` - return getChannel( - getX${tS}(${tl(ty,tv,tO)}), - vec2(${tl(tx,tv,tO)}));`;let tA=(0,to.getGlsl)(tr.session.backend.glContext.version),tE=` - ${tb} - float getValue(${ty.map(tr=>"int "+tr)}) { - ${tT} - } - - void main() { - ${tg} coords = getOutputCoords(); - int lastDim = coords.${ty[td-1]}; - coords.${ty[td-1]} = coords.${ty[td-2]}; - coords.${ty[td-2]} = lastDim; - - vec4 result = vec4(getValue(${th}), 0., 0., 0.); - - ${th[td-1]} = ${th[td-1]} + 1; - if (${th[td-1]} < ${tf[td-1]}) { - result.g = getValue(${th}); - } - - ${th[td-2]} = ${th[td-2]} + 1; - if (${th[td-2]} < ${tf[td-2]}) { - result.a = getValue(${th}); - } - - ${th[td-1]} = ${th[td-1]} - 1; - if (${th[td-2]} < ${tf[td-2]} && - ${th[td-1]} < ${tf[td-1]}) { - result.b = getValue(${th}); - } - ${tA.output} = result; - } - `;return Object.assign(Object.assign({},tn),{output:{dims:tf,type:ti[0].type,textureType:ta.TextureType.packed},shaderSource:tE,hasMain:!0})})(tr,tf,tn,ti.axis)})};let tl=(tr,tn,ti)=>{let to=tr.indexOf(tn);return tr.map((tr,tn)=>tn===to?`${tr} - ${ti}`:tr).join()}},2069:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseConcatAttributes=tn.concat=void 0;let to=ti(246),ta=ti(2039),ts=ti(1163);tn.concat=(tr,tn,ti)=>(td(tn),tr.session.pack&&tn[0].dims.length>1?[tr.run((0,ts.createPackedConcatProgramInfoLoader)(tr,tn,ti),tn)]:[tr.run(tu(tr,tn,ti),tn)]);let tu=(tr,tn,ti)=>{var to,ts;let tu=(to=tn.length,ts=ti.cacheKey,{name:"Concat",inputNames:Array.from({length:to},(tr,tn)=>`X${tn}`),inputTypes:Array(to).fill(ta.TextureType.unpacked),cacheHint:ts});return Object.assign(Object.assign({},tu),{get:()=>((tr,tn,ti,to)=>{let ts=ti[0].dims.slice();if(to>=ts.length||to<-1*ts.length)throw Error("axis specified for concat doesn't match input dimensionality");to<0&&(to=ts.length+to);let tu=ts.slice(0);for(let tr=1;tr`int getTextureWhereDataResides(int index) { - ${tr.map((tr,tn)=>`if(index<${tr}) {return ${tn};} -`).join("")} - }`,tc=tr=>tl(tr),tp=(tr,tn)=>{let ti=[`float fetchDataFromCorrectTexture(int textureIndex, int indices[${tn}]) {`];for(let tn=0;tn{let tn=["int getSizeInConcatAxisValueFromIndex(int index) {"];for(let ti=0;ti(0,to.createAttributeWithCacheKey)({axis:tr.attributes.getInt("axis")});let td=tr=>{if(!tr||tr.length<1)throw Error("too few inputs");let tn=tr[0].type,ti=tr[0].dims.length;if("string"===tn)throw Error("string tensor is not supported yet");for(let to of tr){if(to.type!==tn)throw Error("input tensors should be one type");if(to.dims.length!==ti)throw Error("input tensors should have the same shape")}}},4770:(tr,tn,ti)=>{"use 
strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.createUnpackedGroupedConvProgramInfoLoader=void 0;let to=ti(6231),ta=ti(5060),ts=ti(2039),tu=ti(8138),tl=ti(2823);tn.createUnpackedGroupedConvProgramInfoLoader=(tr,tn,ti)=>{var tc,tp;let tf=(tc=tn.length>2,tp=ti.cacheKey,{name:"GroupedConv",inputNames:tc?["X","W","Bias"]:["X","W"],inputTypes:tc?[ts.TextureType.unpacked,ts.TextureType.unpacked,ts.TextureType.unpacked]:[ts.TextureType.unpacked,ts.TextureType.unpacked],cacheHint:tp});return Object.assign(Object.assign({},tf),{get:()=>((tr,tn,ti,tc)=>{let tp=tn.length>2?"value += getBias(output_channel);":"",tf=tn[0].dims.slice(),td=tn[1].dims.slice(),th=td[0]/tc.group;to.Logger.verbose("GroupedConv",`autpPad:${tc.autoPad}, dilations:${tc.dilations}, group:${tc.group}, kernelShape:${tc.kernelShape}, pads:${tc.pads}, strides:${tc.strides}`);let tg=(0,tu.calculateOutputShape)(tf,td,tc.dilations,tc.pads,tc.strides),tb=(0,ta.getGlsl)(tr.session.backend.glContext.version),{activationFunction:tm,applyActivation:ty}=(0,tl.getActivationSnippet)(tc),t_=` - const ivec2 strides = ivec2(${tc.strides[0]}, ${tc.strides[1]}); - const ivec2 pads = ivec2(${tc.pads[0]}, ${tc.pads[1]}); - ${tm} - void main() { - ivec4 coords = getOutputCoords(); - int batch = coords.x; - int output_channel = coords.y; - ivec2 xRCCorner = coords.zw * strides - pads; - int group_id = output_channel / ${th}; - - float value = 0.0; - for (int wInChannel = 0; wInChannel < ${td[1]}; wInChannel++) { - int input_channel = group_id * ${td[1]} + wInChannel; - for (int wHeight = 0; wHeight < ${td[2]}; wHeight++) { - int xHeight = xRCCorner.x + wHeight * ${tc.dilations[0]}; - - if (xHeight < 0 || xHeight >= ${tf[2]}) { - continue; - } - - for (int wWidth = 0; wWidth < ${td[3]}; wWidth++) { - int xWidth = xRCCorner.y + wWidth * ${tc.dilations[1]}; - if (xWidth < 0 || xWidth >= ${tf[3]}) { - continue; - } - - float xVal = getX(batch, input_channel, xWidth, xHeight); - float wVal = getW(output_channel, wInChannel, wWidth, wHeight); - value += xVal*wVal; - } - } - } - ${tp} - ${ty} - ${tb.output} = vec4(value, .0, .0, .0); - } -`;return Object.assign(Object.assign({},ti),{output:{dims:tg,type:tn[0].type,textureType:ts.TextureType.unpacked},shaderSource:t_,hasMain:!0})})(tr,tn,tf,ti)})}},1386:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.conv2DPacked=tn.conv2DPackedPointwise=void 0;let to=ti(8138),ta=ti(8555),ts=ti(708);tn.conv2DPackedPointwise=(tr,tn,ti)=>{let ta=tn[0].dims,tu=tn[1].dims,tl=(0,to.calculateOutputShape)(ta,tu,ti.dilations,ti.pads,ti.strides),tc=tr.reshapePacked(tn[0],[ta[1],ta[2]*ta[3]]),tp=tr.reshapePacked(tn[1],[tu[0],tu[1]]),tf=tn.length>2?[tp,tc,tn[2]]:[tp,tc],td=tr.run((0,ts.createPackedMatmulProgramInfoLoader)(tr,tf,ti),tf);return tr.reshapePacked(td,tl)},tn.conv2DPacked=(tr,tn,ti)=>{let tu=tn[0].dims,tl=tn[1].dims,tc=(0,to.calculateOutputShape)(tu,tl,ti.dilations,ti.pads,ti.strides),tp=tr.run((0,ta.createPackedIm2ColProgramInfoLoader)(tr,tn[0],tn[1],tc,ti),[tn[0]]),tf=tr.reshapePacked(tn[1],[tl[0],tl[1]*tl[2]*tl[3]]),td=3===tn.length?[tf,tp,tn[2]]:[tf,tp],th=tr.run((0,ts.createPackedMatmulProgramInfoLoader)(tr,td,ti),td);return tr.reshapePacked(th,tc)}},9663:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseConvTransposeAttributes=tn.convTranspose=void 0;let to=ti(246),ta=ti(5060),ts=ti(2039),tu=ti(2823),tl=(tr,tn,ti,to,ta,ts)=>(tr-1)*tn+ti+(to-1)*ta+1-ts,tc=(tr,tn,ti,to,ta)=>{let 
ts=Math.floor(tr/2);"SAME_UPPER"===tn?(ti[to]=ts,ti[ta]=tr-ts):"SAME_LOWER"===tn&&(ti[to]=tr-ts,ti[ta]=ts)};tn.convTranspose=(tr,tn,ti)=>(th(tn,ti),tp(tr,tn,ti));let tp=(tr,tn,ti)=>{let to=td(ti,tn);return[tf(tr,tn,to)]},tf=(tr,tn,ti)=>tr.run(((tr,tn,ti)=>{var to,tl;let tc=(to=tn.length>2,tl=ti.cacheKey,{name:"ConvTranspose",inputNames:to?["X","W","B"]:["X","W"],inputTypes:to?[ts.TextureType.unpacked,ts.TextureType.unpacked,ts.TextureType.unpacked]:[ts.TextureType.unpacked,ts.TextureType.unpacked],cacheHint:tl});return Object.assign(Object.assign({},tc),{get:()=>((tr,tn,ti,to)=>{let tl=tn.length>2?"getB(output_channel)":"0.0",tc=tn[0].dims,tp=tn[1].dims,tf=tp[1],td=tp[0]/to.group,th=[tn[0].dims[0],tn[1].dims[1]*to.group,...to.outputShape],tg=(0,ta.getGlsl)(tr.session.backend.glContext.version),{activationFunction:tb,applyActivation:tm}=(0,tu.getActivationSnippet)(to),ty=` - const ivec2 strides = ivec2(${to.strides[0]}, ${to.strides[1]}); - const ivec2 pads = ivec2(${to.pads[0]}, ${to.pads[1]}); - ${tb} - void main() { - ivec4 coords = getOutputCoords(); - int batch = coords.x; - int output_channel = coords.y; - - ivec2 loc = coords.zw + pads; - - int group_id = output_channel / ${tf}; - int wOutChannel = output_channel - group_id * ${tf}; - - float value = ${tl}; - for (int inChannelOffset = 0; inChannelOffset < ${td}; inChannelOffset++) { - int input_channel = group_id * ${td} + inChannelOffset; - for (int wWOff = 0; wWOff < ${tp[2]}; wWOff++) { - for (int wHOff = 0; wHOff < ${tp[3]}; wHOff++) { - ivec2 wOff = ivec2(wWOff * ${to.dilations[0]}, wHOff * ${to.dilations[1]}); - ivec2 wLoc = loc - wOff; - ivec2 wLocIn = wLoc / strides; - if ( - wLocIn * strides == wLoc && - wLocIn.x >= 0 && wLocIn.x < ${tc[2]} && - wLocIn.y >= 0 && wLocIn.y < ${tc[3]} - ) { - float xVal = getX(batch, input_channel, wLocIn.y, wLocIn.x); - float wVal = getW(input_channel, wOutChannel, wHOff, wWOff); - value += xVal * wVal; - } - } - } - } - ${tm} - ${tg.output} = vec4(value, .0, .0, .0); - } -`;return Object.assign(Object.assign({},ti),{output:{dims:th,type:tn[0].type,textureType:ts.TextureType.unpacked},shaderSource:ty,hasMain:!0})})(tr,tn,tc,ti)})})(tr,tn,ti),tn),td=(tr,tn)=>{let ti=tr.kernelShape.slice();if(0===tr.kernelShape.length)for(let tr=2;tr{let tf=tr.length-2,td=0===tp.length;for(let th=0;th{let tn=tr.attributes,ti=(0,tu.parseInternalActivationAttributes)(tn),ta=tn.getString("auto_pad","NOTSET"),ts=tn.getInts("dilations",[1,1]),tl=tn.getInt("group",1),tc=tn.getInts("kernel_shape",[]),tp=tn.getInts("output_padding",[0,0]),tf=tn.getInts("output_shape",[]),td=tn.getInts("pads",[0,0,0,0]),th=tn.getInts("strides",[1,1]);return(0,to.createAttributeWithCacheKey)(Object.assign({autoPad:ta,dilations:ts,group:tl,kernelShape:tc,outputPadding:tp,outputShape:tf,pads:td,strides:th},ti))};let th=(tr,tn)=>{if(!tr||2!==tr.length&&3!==tr.length)throw Error("Conv requires 2 or 3 inputs");if(4!==tr[0].dims.length||4!==tr[1].dims.length)throw Error("currently only support 2-dimensional conv");if(tr[0].dims[1]!==tr[1].dims[0])throw Error("FILTER_IN_CHANNEL should be equal to DATA_CHANNEL");let ti=tr[1].dims[1]*tn.group;if(3===tr.length&&(1!==tr[2].dims.length||tr[2].dims[0]!==ti))throw Error("invalid bias");let to=tr[0].dims.length-2;if(tn.dilations.length!==to)throw Error(`dilations should be ${to}D`);if(tn.strides.length!==to)throw Error(`strides should be ${to}D`);if(tn.pads.length!==2*to)throw Error(`pads should be ${2*to}D`);if(tn.outputPadding.length!==to)throw Error(`output_padding should be 
${to}D`);if(0!==tn.kernelShape.length&&tn.kernelShape.length!==tr[1].dims.length-2)throw Error("invalid kernel shape");if(0!==tn.outputShape.length&&tn.outputShape.length!==tr[0].dims.length-2)throw Error("invalid output shape");if("float32"!==tr[0].type||"float32"!==tr[1].type)throw Error("ConvTranspose input(X,W) should be float tensor");if(3===tr.length&&"float32"!==tr[2].type)throw Error("ConvTranspose input(bias) should be float tensor")}},8138:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseConvAttributes=tn.conv=tn.calculateOutputShape=void 0;let to=ti(246),ta=ti(2517),ts=ti(4770),tu=ti(1386),tl=ti(9828),tc=ti(2823),tp=ti(3248),tf=ti(5623);tn.calculateOutputShape=(tr,tn,ti,to,ta)=>{let ts=tr[0],tu=tr.slice(2),tl=tu.length,tc=tn[0],tp=tn.slice(2).map((tr,tn)=>tr+(tr-1)*(ti[tn]-1)),tf=tu.map((tr,tn)=>tr+to[tn]+to[tn+tl]).map((tr,tn)=>Math.floor((tr-tp[tn]+ta[tn])/ta[tn]));return[ts,tc].concat(...tf)},tn.conv=(tr,tn,ti)=>(tm(tn,ti),td(tr,tn,ti));let td=(tr,tn,ti)=>{let to=tb(ti,tn),ta=tr.session.pack,tl=1===to.kernelShape[0]&&1===to.kernelShape[1];return to.group>1?[tr.run((0,ts.createUnpackedGroupedConvProgramInfoLoader)(tr,tn,to),tn)]:tl&&ta?[th(tr,tn,to)]:ta&&4===tn[0].dims.length&&1===tn[0].dims[0]&&!tl?[(0,tu.conv2DPacked)(tr,tn,to)]:[tg(tr,tn,to)]},th=(tr,ti,to)=>{let ta=ti[0].dims,ts=ti[1].dims,tu=(0,tn.calculateOutputShape)(ta,ts,to.dilations,to.pads,to.strides),tl=tr.reshapeUnpacked(ti[0],[ta[1],ta[2]*ta[3]]),tc=tr.reshapeUnpacked(ti[1],[ts[0],ts[1]]),tp=ti.length>2?[tc,tl,ti[2]]:[tc,tl],td=tr.run((0,tf.createMatmulProgramInfoLoader)(tp,to),tp);return tr.reshapeUnpacked(td,tu)},tg=(tr,ti,to)=>{let ta=ti[0].dims,ts=ti[1].dims,tu=(0,tn.calculateOutputShape)(ta,ts,to.dilations,to.pads,to.strides),tc=tr.run((0,tp.createIm2ColProgramInfoLoader)(tr,ti[0],ti[1],tu,to),[ti[0]]),tf=3===ti.length?[tc,ti[1],ti[2]]:[tc,ti[1]];return tr.run((0,tl.createDotProductProgramInfoLoader)(tr,ti,tu,to),tf)},tb=(tr,tn)=>{let ti=tr.kernelShape.slice();if(0===tr.kernelShape.length)for(let tr=2;tr{let tn=tr.attributes,ti=(0,tc.parseInternalActivationAttributes)(tn),ta=tn.getString("auto_pad","NOTSET"),ts=tn.getInts("dilations",[1,1]),tu=tn.getInt("group",1),tl=tn.getInts("kernel_shape",[]),tp=tn.getInts("pads",[0,0,0,0]),tf=tn.getInts("strides",[1,1]);return(0,to.createAttributeWithCacheKey)(Object.assign({autoPad:ta,dilations:ts,group:tu,kernelShape:tl,pads:tp,strides:tf},ti))};let tm=(tr,tn)=>{if(!tr||2!==tr.length&&3!==tr.length)throw Error("Conv requires 2 or 3 inputs");if(4!==tr[0].dims.length||4!==tr[1].dims.length)throw Error("currently only support 2-dimensional conv");if(tr[0].dims[1]!==tr[1].dims[1]*tn.group)throw Error("FILTER_IN_CHANNEL should be equal to DATA_CHANNEL");if(3===tr.length&&(1!==tr[2].dims.length||tr[1].dims[0]!==tr[2].dims[0]))throw Error("invalid bias");let ti=tr[0].dims.length-2;if(tn.dilations.length!==ti)throw Error(`dilations should be ${ti}D`);if(tn.strides.length!==ti)throw Error(`strides should be ${ti}D`);if(tn.pads.length!==2*ti)throw Error(`pads should be ${2*ti}D`);if(0!==tn.kernelShape.length&&tn.kernelShape.length!==tr[1].dims.length-2)throw Error("invalid kernel shape");if("float32"!==tr[0].type||"float32"!==tr[1].type)throw Error("Conv input(X,W) should be float tensor");if(3===tr.length&&"float32"!==tr[2].type)throw Error("Conv input(bias) should be float tensor")}},5193:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseDepthToSpaceAttributes=tn.depthToSpace=void 0;let 
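-/*
- * NOTE: this module implements DepthToSpace without a dedicated shader; it
- * reshapes the 4-D input to 6-D, reuses the generic transpose kernel, and
- * reshapes back, matching the ONNX specification. With blocksize b, DCR
- * mode performs (shapes annotated for illustration):
- *
- *   [N, C, H, W]                              // input
- *     -> reshape   [N, b, b, C/(b*b), H, W]
- *     -> transpose perm [0, 3, 4, 1, 5, 2]    // => [N, C/(b*b), H, b, W, b]
- *     -> reshape   [N, C/(b*b), H*b, W*b]     // output
- *
- * CRD mode instead reshapes to [N, C/(b*b), b, b, H, W] and transposes with
- * perm [0, 1, 4, 2, 5, 3].
- */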
to=ti(3738);tn.depthToSpace=(tr,tn,ti)=>{ta(tn);let ts=ti.blocksize,tu=ts*ts,tl="DCR"===ti.mode?[0,3,4,1,5,2]:[0,1,4,2,5,3],tc="DCR"===ti.mode?[tn[0].dims[0],ts,ts,tn[0].dims[1]/tu,tn[0].dims[2],tn[0].dims[3]]:[tn[0].dims[0],tn[0].dims[1]/tu,ts,ts,tn[0].dims[2],tn[0].dims[3]],tp=tr.reshapeUnpacked(tn[0],tc),tf={perm:tl,cacheKey:`${tl}`},[td]=(0,to.transpose)(tr,[tp],tf),th=[tn[0].dims[0],tn[0].dims[1]/tu,tn[0].dims[2]*ts,tn[0].dims[3]*ts];return[tr.reshapeUnpacked(td,th)]},tn.parseDepthToSpaceAttributes=tr=>{let tn=tr.attributes.getInt("blocksize");if(tn<1)throw Error(`blocksize must be >= 1, but got : ${tn} for DepthToSpace`);let ti=tr.attributes.getString("mode","DCR");if("DCR"!==ti&&"CRD"!==ti)throw Error(`unrecognized mode: ${ti} for DepthToSpace`);return{mode:ti,blocksize:tn}};let ta=tr=>{if(1!==tr.length)throw Error(`DepthToSpace expect 1 inputs, but got ${tr.length}`);if("string"===tr[0].type||4!==tr[0].dims.length)throw TypeError("DepthToSpace input should be a 4-D numeric tensor")}},9828:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.createDotProductProgramInfoLoader=void 0;let to=ti(2517),ta=ti(5060),ts=ti(2039),tu=ti(2823),tl=ti(3248);tn.createDotProductProgramInfoLoader=(tr,tn,ti,tc)=>{var tp,tf;let td=(tp=tn.length>2,tf=tc,{name:"ConvDotProduct",inputNames:tp?["Im2Col","K","B"]:["Im2Col","K"],inputTypes:tp?[ts.TextureType.unpacked,ts.TextureType.packedLastDimension,ts.TextureType.unpacked]:[ts.TextureType.unpacked,ts.TextureType.packedLastDimension],cacheKey:tf.activationCacheKey});return Object.assign(Object.assign({},td),{get:()=>((tr,tn,ti,tc,tp)=>{let tf=ti[0].dims,td=ti[1].dims,th=[td[0],Math.ceil(tf[1]*td[2]*td[3]/4)],tg=(0,tl.calculateIm2ColDims)(tf,td,tc),[tb,tm]=tr.calculateTextureWidthAndHeight(th,ts.TextureType.packedLastDimension),ty=to.ShapeUtil.computeStrides(tg),[t_,tv]=tr.calculateTextureWidthAndHeight(tg,ts.TextureType.packedLastDimension),tx=tc.length,tw=ti.length<3?"0.0":"_B(b)",tT=Math.ceil(tf[1]*td[2]*td[3]/4),{activationFunction:tS,applyActivation:tO}=(0,tu.getActivationSnippet)(tp),tA=(0,ta.getGlsl)(tr.session.backend.glContext.version),tE=` -${tS} -float process(int indices[${tx}]) { - int b[1]; - b[0] = indices[1]; - int im2col[4]; - im2col[0] = indices[0]; - im2col[1] = indices[2]; - im2col[2] = indices[3]; - int im2colOffset = im2col[0] * ${ty[0]} + im2col[1] * ${ty[1]} + im2col[2] * ${ty[2]}; - int kernelOffset = indices[1] * ${th[1]}; - float value = ${tw}; - for (int i = 0; i < ${tT}; ++i) { - vec2 im2colCoords = offsetToCoords(im2colOffset, ${t_}, ${tv}); - vec2 kernelCoords = offsetToCoords(kernelOffset, ${tb}, ${tm}); - value += dot(${tA.texture2D}(Im2Col, im2colCoords), ${tA.texture2D}(K, kernelCoords)); - ++im2colOffset; - ++kernelOffset; - } - ${tO} - return value; -}`;return Object.assign(Object.assign({},tn),{output:{dims:tc,type:ti[0].type,textureType:ts.TextureType.unpacked},shaderSource:tE})})(tr,td,tn,ti,tc)})}},7992:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseFlattenAttributes=tn.flatten=void 0;let to=ti(2517);tn.flatten=(tr,tn,ti)=>{ta(tn,ti);let ts=to.ShapeUtil.flattenShape(tn[0].dims,ti);return[tr.reshapeUnpacked(tn[0],ts)]},tn.parseFlattenAttributes=tr=>tr.attributes.getInt("axis",1);let ta=(tr,tn)=>{if(!tr||1!==tr.length)throw Error("Flatten requires 1 input.");let ti=tr[0].dims.length;if(0===ti)throw Error("scalar tensor is not supported.");if(tn<-ti||tn>ti)throw Error("Invalid axis");if("string"===tr[0].type)throw Error("string tensor is not 
supported.")}},2823:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseInternalActivationAttributes=tn.getActivationSnippet=void 0;let to=ti(2517),ta=ti(4909);tn.getActivationSnippet=function(tr){let tn;switch(tr.activation){case"Relu":tn=(0,ta.glslRelu)();break;case"Sigmoid":tn=(0,ta.glslSigmoid)();break;case"Clip":tn=(0,ta.glslClip)(tr.clipMin,tr.clipMax);break;default:return{activationFunction:"",applyActivation:""}}let ti=tn.name;return{activationFunction:tn.body,applyActivation:`value = ${ti}_(value);`}},tn.parseInternalActivationAttributes=tr=>{let tn=tr.getString("activation","");if("Clip"===tn){let[ti,ta]=tr.getFloats("activation_params",[to.MIN_CLIP,to.MAX_CLIP]);return{activation:tn,clipMax:ta,clipMin:ti,activationCacheKey:`${tn}:${ti},${ta}`}}return{activation:tn,activationCacheKey:tn}}},1253:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseGatherAttributes=tn.gather=void 0;let to=ti(246),ta=ti(782),ts=ti(2517),tu=ti(2039);tn.gather=(tr,tn,ti)=>(tp(tn,ti.axis),[tr.run(tc(tr,tn,ti),tn)]),tn.parseGatherAttributes=tr=>(0,to.createAttributeWithCacheKey)({axis:tr.attributes.getInt("axis",0)});let tl={name:"Gather",inputNames:["A","B"],inputTypes:[tu.TextureType.unpacked,tu.TextureType.unpacked]},tc=(tr,tn,ti)=>{let to=Object.assign(Object.assign({},tl),{cacheHint:ti.cacheKey});return Object.assign(Object.assign({},to),{get:()=>((tr,tn,ti,to)=>{let ta=ti[0].dims.slice(),tl=ti[1].dims.slice(),tc=Array(ta.length+tl.length-1);to=ts.ShapeUtil.normalizeAxis(to,ta.length);let tp=[];for(let tr=0;tr{if(!tr||2!==tr.length)throw Error("Gather requires 2 inputs.");let ti=tr[0].dims.length;if(ti<1)throw Error("Invalid input shape.");if(tn<-ti||tn>ti-1)throw Error("Invalid axis.");if(-1===ta.NUMBER_TYPES.indexOf(tr[0].type)||"int32"!==tr[1].type&&"int16"!==tr[1].type)throw Error("Invaid input type.")}},4776:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseGemmAttributesV11=tn.parseGemmAttributesV7=tn.gemm=void 0;let to=ti(246),ta=ti(2517),ts=ti(2039);tn.gemm=(tr,tn,ti)=>(tp(tn,ti),[tr.run(tl(tn,ti),tn)]);let tu=(tr,tn)=>{let ti=0!==tr.attributes.getInt("transA",0),ta=0!==tr.attributes.getInt("transB",0),ts=tr.attributes.getFloat("alpha",1),tu=tr.attributes.getFloat("beta",1);return(0,to.createAttributeWithCacheKey)({transA:ti,transB:ta,alpha:ts,beta:tu,isOptionalC:tn})};tn.parseGemmAttributesV7=tr=>tu(tr,!1),tn.parseGemmAttributesV11=tr=>tu(tr,!0);let tl=(tr,tn)=>{let ti={name:"Gemm",inputNames:3===tr.length?["A","B","C"]:["A","B"],inputTypes:3===tr.length?[ts.TextureType.unpacked,ts.TextureType.unpacked,ts.TextureType.unpacked]:[ts.TextureType.unpacked,ts.TextureType.unpacked],key:tn.cacheKey};return Object.assign(Object.assign({},ti),{get:()=>tc(ti,tr,tn)})},tc=(tr,tn,ti)=>{let to=tn[0].dims.slice(),tu=tn[1].dims.slice(),[tl,tc]=ta.GemmUtil.getShapeOfGemmResult(to,ti.transA,tu,ti.transB,3===tn.length?tn[2].dims:void 0),tp=[tl,tc];if(!tp)throw Error("Can't use gemm on the given tensors");let tf=to[to.length-1],td="";ti.transA&&(tf=to[0]),ti.transA&&ti.transB?td="value += _A_T(a) * _B_T(b);":ti.transA&&!ti.transB?td="value += _A_T(a) * _B(b);":!ti.transA&&ti.transB?td="value += _A(a) * _B_T(b);":ti.transA||ti.transB||(td="value += _A(a) * _B(b);");let th=tp.length,tg=` - float process(int indices[${th}]) { - int a[${th}]; - int b[${th}]; - ${3===tn.length?`int c[${tn[2].dims.length}];`:""} - - copyVec(indices, a); - copyVec(indices, b); - ${3===tn.length?"bcastIndices_C(indices, c);":""} - - 
float value = 0.0; - for (int k=0; k<${tf}; ++k) { - a[${th-1}] = k; - b[${th-2}] = k; - ${td} - } - - value = value * alpha; - ${3===tn.length?"value += beta * _C(c);":""} - return value; - }`;return Object.assign(Object.assign({},tr),{output:{dims:tp,type:tn[0].type,textureType:ts.TextureType.unpacked},variables:[{name:"alpha",type:"float",data:ti.alpha},{name:"beta",type:"float",data:ti.beta}],shaderSource:tg})},tp=(tr,tn)=>{if(!tr)throw Error("Input is missing");if(tn.isOptionalC&&(tr.length<2||tr.length>3))throw Error("Invaid input shape.");if(!tn.isOptionalC&&3!==tr.length)throw Error("Gemm requires 3 inputs");if(3===tr.length&&1!==tr[2].dims.length&&2!==tr[2].dims.length)throw Error("Invalid input shape of C");if("float32"!==tr[0].type&&"float64"!==tr[0].type||"float32"!==tr[1].type&&"float64"!==tr[1].type||3===tr.length&&"float32"!==tr[2].type&&"float64"!==tr[2].type)throw Error("Invalid input type.");if(tr[0].type!==tr[1].type||3===tr.length&&tr[0].type!==tr[2].type)throw Error("Input types are mismatched")}},8555:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.createPackedIm2ColProgramInfoLoader=void 0;let to=ti(5060),ta=ti(2039),ts=ti(2827);tn.createPackedIm2ColProgramInfoLoader=(tr,tn,ti,tu,tl)=>{var tc;let tp=(tc=tl.cacheKey,{name:"Im2Col (packed)",inputNames:["A"],inputTypes:[ta.TextureType.packed],cacheHint:tc});return Object.assign(Object.assign({},tp),{get:()=>((tr,tn,ti,tu,tl,tc)=>{let tp=ti.dims,tf=tu.dims,td=tl.length,th=[tf[1]*tf[2]*tf[3],tl[2]*tl[3]],tg=tf[2]*tf[3],tb=(0,ts.unpackFromChannel)(),tm=(0,to.getGlsl)(tr.session.backend.glContext.version),ty="";for(let tr=0;tr<=1;tr++)for(let tn=0;tn<=1;tn++)ty+=` - blockIndex = rc.x + ${tn}; - pos = rc.y + ${tr}; - - if(blockIndex < ${th[1]} && pos < ${th[0]}) { - offsetY = int(blockIndex / (${tl[td-1]})) * ${tc.strides[0]} - - ${tc.pads[0]}; - d0 = offsetY + ${tc.dilations[0]} * (imod(pos, ${tg}) / ${tf[2]}); - - if(d0 < ${tp[2]} && d0 >= 0) { - offsetX = imod(blockIndex, ${tl[td-1]}) * ${tc.strides[1]} - - ${tc.pads[1]}; - d1 = offsetX + ${tc.dilations[1]} * imod(imod(pos, ${tg}), ${tf[2]}); - - if(d1 < ${tp[3]} && d1 >= 0) { - - ch = int(float(pos)/ ${tg}.); - innerDims = vec2(d0, d1); - result[${2*tr+tn}] = getChannel( - getA(0, ch, int(innerDims.x), - int(innerDims.y)), innerDims); - } - } - } - - `;let t_=` - ${tb} - - void main() { - ivec2 rc = getOutputCoords(); - vec4 result = vec4(0.0); - int blockIndex, pos, offsetY, d0, offsetX, d1, ch; - vec2 innerDims; - ${ty} - ${tm.output} = result; - } - `;return Object.assign(Object.assign({},tn),{output:{dims:th,type:ti.type,textureType:ta.TextureType.packed},shaderSource:t_,hasMain:!0})})(tr,tp,tn,ti,tu,tl)})}},3248:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.calculateIm2ColDims=tn.createIm2ColProgramInfoLoader=void 0;let to=ti(2039);tn.createIm2ColProgramInfoLoader=(tr,ti,ta,ts,tu)=>{var tl;let tc=(tl=tu.cacheKey,{name:"Im2Col",inputNames:["X"],inputTypes:[to.TextureType.unpacked],cacheHint:tl});return Object.assign(Object.assign({},tc),{get:()=>((tr,ti,ta,ts,tu,tl)=>{let tc=ta.dims,tp=ts.dims,tf=tu.length,td=(0,tn.calculateIm2ColDims)(tc,tp,tu,4),th=` - const int XC = ${tc[1]}; - const int XH = ${tc[2]}; - const int XW = ${tc[3]}; - const int KH = ${tl.kernelShape[0]}; - const int KW = ${tl.kernelShape[1]}; - const int dilationH = ${tl.dilations[0]}; - const int dilationW = ${tl.dilations[1]}; - const int strideH = ${tl.strides[0]}; - const int strideW = ${tl.strides[1]}; - const int padH = 
${tl.pads[0]}; - const int padW = ${tl.pads[1]}; - const int KHKW = KH*KW; - const int XCKHKW = XC * KHKW; - const int outputChannels = 4; - vec4 process(int indices[${tf}]) { - int b = indices[0]; // batch size - int oh = indices[1] * strideH - padH; //output height - int ow = indices[2] * strideW - padW; //output width - int p = indices[3] * outputChannels; //patch - vec4 value = vec4(0.0); - for(int i=0; i < outputChannels; ++i) { - if(p < XCKHKW) { - int patchC = p / KHKW; - int patchH = (p - patchC*KHKW) / KW; - int patchW = (p - patchC*KHKW) - patchH * KW; - int xh2 = oh + patchH * dilationH; - int xw2 = ow + patchW * dilationW; - int x[${tc.length}]; - x[0] = b; - x[1] = patchC; - x[2] = xh2; - x[3] = xw2; - if(xh2 >= 0 && - xh2 < XH && - xw2 >= 0 && - xw2 < XW) { - value[i] = _X(x); - } - } - ++p; - } - return value; - } - `;return Object.assign(Object.assign({},ti),{output:{dims:td,type:ta.type,textureType:to.TextureType.packedLastDimension},shaderSource:th})})(0,tc,ti,ta,ts,tu)})},tn.calculateIm2ColDims=(tr,tn,ti,to=4)=>[ti[0],ti[2],ti[3],Math.ceil(tr[1]*tn[2]*tn[3]/to)]},6572:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseImageScalerAttributes=tn.imageScaler=void 0;let to=ti(246),ta=ti(2039);tn.imageScaler=(tr,tn,ti)=>(tc(tn),[tr.run(tu(tr,tn,ti),tn)]),tn.parseImageScalerAttributes=tr=>{let tn=tr.attributes.getFloat("scale"),ti=tr.attributes.getFloats("bias");return(0,to.createAttributeWithCacheKey)({scale:tn,bias:ti})};let ts={name:"ImageScaler",inputNames:["X"],inputTypes:[ta.TextureType.unpacked]},tu=(tr,tn,ti)=>{let to=Object.assign(Object.assign({},ts),{cacheHint:ti.cacheKey});return Object.assign(Object.assign({},to),{get:()=>((tr,tn,ti,to)=>{let ts=ti[0].dims.slice(),tu=ts.length,tc=` - ${tl(to.bias.length)} - float process(int indices[${tu}]) { - return _X(indices) * scale + getBias(bias, indices[1]); - }`;return Object.assign(Object.assign({},tn),{output:{dims:ts,type:ti[0].type,textureType:ta.TextureType.unpacked},variables:[{name:"bias",type:"float",arrayLength:to.bias.length,data:to.bias},{name:"scale",type:"float",data:to.scale}],shaderSource:tc})})(0,to,tn,ti)})},tl=tr=>{let tn=[`float getBias(float bias[${tr}], int channel) {`];for(let ti=0;ti{if(!tr||1!==tr.length)throw Error("ImageScaler requires 1 input.");if(4!==tr[0].dims.length)throw Error("Invalid input shape.");if("float32"!==tr[0].type&&"float64"!==tr[0].type)throw Error("Invalid input type.")}},3346:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseInstanceNormalizationAttributes=tn.instanceNormalization=void 0;let to=ti(5060),ta=ti(2039);tn.instanceNormalization=(tr,tn,ti)=>{tp(tn);let to=tr.run(tu(tn[0]),tn);return[tr.run(tc(tr,tn[0],ti,to.dims),[tn[0],to,tn[1],tn[2]])]},tn.parseInstanceNormalizationAttributes=tr=>tr.attributes.getFloat("epsilon",1e-5);let ts={name:"InstanceNormalization_MeanAndVariance",inputNames:["X"],inputTypes:[ta.TextureType.unpacked]},tu=tr=>Object.assign(Object.assign({},ts),{get:()=>((tr,tn)=>{let ti=tn.dims.slice(),to=ti[1],ts=ti[2]*ti[3],tu=[ti[0],to],tl=` - vec4 process(int[2] indices) { - vec4 v = vec4(0.0); - int a[4]; - a[0] = indices[0]; - a[1] = indices[1]; - float temp = 0.0; - for(int a2=0; a2<${ti[2]}; a2++) { - a[2] = a2; - for(int a3=0; a3<${ti[3]}; a3++) { - a[3] = a3; - float x = _X(a); - temp += x; - } - } - float mean = temp / float(${ts}); - temp = 0.0; - for(int a2=0; a2<${ti[2]}; a2++) { - a[2] = a2; - for(int a3=0; a3<${ti[3]}; a3++) { - a[3] = a3; - float x = _X(a); - temp += (x - 
mean) * (x - mean); - } - } - v.r = mean; - v.g = temp / float(${ts}); - - return v; - }`;return Object.assign(Object.assign({},tr),{output:{dims:tu,type:tn.type,textureType:ta.TextureType.packedLastDimension},shaderSource:tl})})(ts,tr)}),tl={name:"InstanceNormalization_ComputeOutput",inputNames:["X","MeanAndVariance","Scale","B"],inputTypes:[ta.TextureType.unpacked,ta.TextureType.packedLastDimension,ta.TextureType.unpacked,ta.TextureType.unpacked]},tc=(tr,tn,ti,ts)=>{let tu=Object.assign(Object.assign({},tl),{cacheHint:`${ti}`});return Object.assign(Object.assign({},tu),{get:()=>((tr,tn,ti,ts,tu)=>{let tl=(0,to.getGlsl)(tr.session.backend.glContext.version),[tc,tp]=tr.calculateTextureWidthAndHeight(tu,ta.TextureType.packedLastDimension),[tf,td]=[tc/4,tp],th=` - vec4 get_MeanAndVariance(int[2] mv) { - int offset = indicesToOffset_MeanAndVariance(mv); - vec2 coords = offsetToCoords(offset, ${tf}, ${td}); - return ${tl.texture2D}(MeanAndVariance, coords); - } - - float process(int[4] indices) { - int mv[2]; - mv[0] = indices[0]; - mv[1] = indices[1]; - vec4 mean_and_variance = get_MeanAndVariance(mv); - float mean = mean_and_variance.r; - float variance = mean_and_variance.g; - - int sb[1]; - sb[0] = indices[1]; - float scale = _Scale(sb); - float b = _B(sb); - - return scale * (_X(indices) - mean) / sqrt(variance + epsilon) + b; - }`;return Object.assign(Object.assign({},tn),{output:{dims:ti.dims,type:ti.type,textureType:ta.TextureType.unpacked},variables:[{name:"epsilon",type:"float",data:ts}],shaderSource:th})})(tr,tu,tn,ti,ts)})},tp=tr=>{if(!tr||3!==tr.length)throw Error("InstanceNormalization requires 3 inputs.");let tn=tr[0],ti=tr[1],to=tr[2];if(tn.dims.length<3||1!==ti.dims.length||1!==to.dims.length)throw Error("Invalid input shape.");if(ti.dims[0]!==tn.dims[1]||to.dims[0]!==tn.dims[1])throw Error("Input shapes are mismatched.");if("float32"!==tn.type&&"float64"!==tn.type||"float32"!==ti.type&&"float64"!==ti.type||"float32"!==to.type&&"float64"!==to.type)throw Error("Invalid input type.");if(4!==tr[0].dims.length)throw Error("Only support 4-D input shape.")}},708:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.createPackedMatmulProgramInfoLoader=void 0;let to=ti(2517),ta=ti(5060),ts=ti(2039),tu=ti(9390),tl=ti(2823),tc=ti(5623);tn.createPackedMatmulProgramInfoLoader=(tr,tn,ti)=>{var tp,tf;let td=(tp=tn.length>2,tf=ti.activationCacheKey,{name:"MatMul (packed)",inputNames:tp?["A","B","Bias"]:["A","B"],inputTypes:tp?[ts.TextureType.packed,ts.TextureType.packed,ts.TextureType.packed]:[ts.TextureType.packed,ts.TextureType.packed],cacheHint:tf});return Object.assign(Object.assign({},td),{get:()=>((tr,tn,ti,tp)=>{let tf=ti.length>2,td=tf?"value += getBiasForMatmul();":"",th=ti[0].dims,tg=ti[1].dims,tb=to.BroadcastUtil.calcShape(th,tg,!0),tm=!to.ShapeUtil.areEqual(ti[0].dims,ti[1].dims);if(!tb)throw Error("Can't use matmul on the given tensors");let ty=th[th.length-1],t_=Math.ceil(ty/2),tv=th.length,tx=tg.length,tw=(0,ta.getGlsl)(tr.session.backend.glContext.version),tT=(0,tu.getCoordsDataType)(tb.length),tS=tb.length,tO=(0,tu.getGlChannels)(),{activationFunction:tA,applyActivation:tE}=(0,tl.getActivationSnippet)(tp),tI=tf?`${(0,tc.getBiasForMatmul)(tT,tO,ti[2].dims,tb,!0)}`:"",tP=tm?`${function(tr,tn,ti,ta){let ts=[],tu=[],tl=ti[0].dims,tc=ti[1].dims,tp=tl.length,tf=tc.length,td=ta.length,th=td-tp,tg=td-tf;(ts=tl.map((tr,ti)=>`coords.${tn[ti+th]}`))[tp-1]="i*2",ts.join(", "),(tu=tc.map((tr,ti)=>`coords.${tn[ti+tg]}`))[tf-2]="i*2",tu.join(", ");let 
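-/*
- * NOTE: this block emits the broadcast-aware samplers (getAAtOutCoordsMatmul
- * and getBAtOutCoordsMatmul) for the packed MatMul shader. getBroadcastDims
- * returns the axes of an input that have size 1 but correspond to a larger
- * output axis; the generated GLSL pins those coordinates to 0 before
- * sampling. An illustrative TypeScript sketch of such a utility (not the
- * bundle's exact implementation):
- *
- *   function getBroadcastDims(inShape: number[], outShape: number[]): number[] {
- *     const dims: number[] = [];
- *     const offset = outShape.length - inShape.length;
- *     for (let i = 0; i < inShape.length; i++) {
- *       if (inShape[i] === 1 && outShape[i + offset] > 1) {
- *         dims.push(i);  // this input axis is broadcast along the output
- *       }
- *     }
- *     return dims;
- *   }
- */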
tb=to.BroadcastUtil.getBroadcastDims(tl,ta),tm=to.BroadcastUtil.getBroadcastDims(tc,ta),ty=tb.map(tr=>`coords.${tn[tr+th]} = 0;`).join("\n"),t_=tm.map(tr=>`coords.${tn[tr+tg]} = 0;`).join("\n"),tv=`int lastDim = coords.${tn[td-1]}; - coords.${tn[td-1]} = coords.${tn[td-2]}; - coords.${tn[td-2]} = lastDim;`;return` -vec4 getAAtOutCoordsMatmul(int i) { - ${tr} coords = getOutputCoords(); - ${tv} - ${ty} - vec4 outputValue = getA(${ts}); - return outputValue; -} - -vec4 getBAtOutCoordsMatmul(int i) { - ${tr} coords = getOutputCoords(); - ${tv} - ${t_} - vec4 outputValue = getB(${tu}); - return outputValue; -}`}(tT,tO,ti,tb)}`:"",tD=tm?"getAAtOutCoordsMatmul(i)":`getA(${function(tr,tn){let ti="";for(let to=0;to{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.getBiasForMatmul=tn.createMatmulProgramInfoLoader=tn.parseMatMulAttributes=tn.matMul=void 0;let to=ti(2517),ta=ti(2039),ts=ti(9390),tu=ti(2823),tl=ti(708);function tc(tr,tn){var ti,tl;let tc=(ti=tr.length>2,tl=tn.activationCacheKey,{name:"MatMul",inputNames:ti?["A","B","Bias"]:["A","B"],inputTypes:ti?[ta.TextureType.unpacked,ta.TextureType.unpacked,ta.TextureType.unpacked]:[ta.TextureType.unpacked,ta.TextureType.unpacked],cacheHint:tl});return Object.assign(Object.assign({},tc),{get:()=>(function(tr,tn,ti){let tl=tn[0].dims,tc=tn[1].dims,tp=to.BroadcastUtil.calcShape(tl,tc,!0);if(!tp)throw Error("Can't use matmul on the given tensors");let td=(0,ts.getCoordsDataType)(tp.length),th=(0,ts.getGlChannels)(),{activationFunction:tg,applyActivation:tb}=(0,tu.getActivationSnippet)(ti),tm=tn.length>2,ty=tm?"value += getBiasForMatmul();":"",t_=tm?`${tf(td,th,tn[2].dims,tp,!1)}`:"",tv=tp.length,tx=tl.length,tw=tc.length,tT=` - ${tg} - ${t_} - float process(int indices[${tv}]) { - int a[${tx}]; - int b[${tw}]; - bcastMatmulIndices_A(indices, a); - bcastMatmulIndices_B(indices, b); - - float value; - for (int k=0; k<${tl[tl.length-1]}; ++k) { - a[${tx-1}] = k; - b[${tw-2}] = k; - value += _A(a) * _B(b); - } - ${ty} - ${tb} - return value; - }`;return Object.assign(Object.assign({},tr),{output:{dims:tp,type:tn[0].type,textureType:ta.TextureType.unpacked},shaderSource:tT})})(tc,tr,tn)})}tn.matMul=(tr,tn,ti)=>(tp(tn),tr.session.pack?[tr.run((0,tl.createPackedMatmulProgramInfoLoader)(tr,tn,ti),tn)]:[tr.run(tc(tn,ti),tn)]),tn.parseMatMulAttributes=tr=>(0,tu.parseInternalActivationAttributes)(tr.attributes),tn.createMatmulProgramInfoLoader=tc;let tp=tr=>{if(!tr||2!==tr.length)throw Error("MatMul requires 2 inputs.");if(tr[0].dims[tr[0].dims.length-1]!==tr[1].dims[tr[1].dims.length-2])throw Error("shared dimension does not match.");if("float32"!==tr[0].type&&"float64"!==tr[0].type||"float32"!==tr[1].type&&"float64"!==tr[1].type)throw Error("inputs should be float type");if(tr[0].type!==tr[1].type)throw Error("inputs types should match")};function tf(tr,tn,ti,ta,ts){let tu="",tl=ti.length,tc=ta.length,tp=tc-tl;tu=tc<2&&tl>0?"coords":ti.map((tr,ti)=>`coords.${tn[ti+tp]}`).join(", ");let tf=to.BroadcastUtil.getBroadcastDims(ti,ta).map(tr=>`coords.${tn[tr+tp]} = 0;`).join("\n"),td="vec4(outputValue.xx, outputValue.yy)";return 1===to.ShapeUtil.size(ti)&&(td="vec4(outputValue.x)"),ts?` -vec4 getBiasForMatmul() { - ${tr} coords = getOutputCoords(); - ${tf} - vec4 outputValue = getBias(${tu}); - return ${td}; -}`:` -float getBiasForMatmul() { - ${tr} coords = getOutputCoords(); - ${tf} - return getBias(coords.x); -}`}tn.getBiasForMatmul=tf},2403:(tr,tn,ti)=>{"use 
strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.createPackProgramInfoLoader=void 0;let to=ti(5060),ta=ti(2039),ts=ti(9390),tu=ti(2827),tl={name:"pack",inputNames:["A"],inputTypes:[ta.TextureType.unpackedReversed]};tn.createPackProgramInfoLoader=(tr,tn)=>Object.assign(Object.assign({},tl),{get:()=>((tr,tn)=>{var ti,tc,tp,tf;let td;let th=(0,to.getGlsl)(tr.session.backend.glContext.version),tg=tn.dims,tb=tg.length,tm=tn.dims.length,ty=(0,ts.getCoordsDataType)(tm),t_=(0,tu.getChannels)("rc",tm),tv=(ti=tm,tc=t_,tp=tg[tg.length-2],tf=tg[tg.length-1],0===ti||1===ti?"":` - int r = ${tc[ti-2]}; - int c = ${tc[ti-1]}; - int rp1 = ${tc[ti-2]} + 1; - int cp1 = ${tc[ti-1]} + 1; - bool rEdge = rp1 >= ${tf}; - bool cEdge = cp1 >= ${tp}; - `);td=0===tb?[1,1]:1===tb?[tg[0],1]:[tg[tm-1],tg[tm-2]];let tx=function(tr,tn,ti){if(0===tr)return"false";if(1===tr)return`rc > ${tn[0]}`;let to="";for(let ta=tr-2;ta= ${tn[ta-tr+2]}`,ta= ${tr[0]} ? 0. : getA(rc + 1), - 0, 0`;let to="";if(ti>2)for(let tr=0;tr{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.unpackFromChannel=tn.getChannels=tn.getVecChannels=void 0;let to=ti(9390);function ta(tr,tn){return(0,to.getGlChannels)(tn).map(tn=>`${tr}.${tn}`)}tn.getVecChannels=ta,tn.getChannels=function(tr,tn){return 1===tn?[tr]:ta(tr,tn)},tn.unpackFromChannel=function(){return"\n float getChannel(vec4 frag, int dim) {\n int modCoord = imod(dim, 2);\n return modCoord == 0 ? frag.r : frag.g;\n }\n\n float getChannel(vec4 frag, vec2 innerDims) {\n vec2 modCoord = mod(innerDims, 2.);\n return modCoord.x == 0. ?\n (modCoord.y == 0. ? frag.r : frag.g) :\n (modCoord.y == 0. ? frag.b : frag.a);\n }\n "}},2870:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parsePadAttributesV11=tn.padV11=tn.parsePadAttributesV2=tn.padV2=void 0;let to=ti(246),ta=ti(2517),ts=ti(5060),tu=ti(2039),tl={name:"Pad",inputNames:["A"],inputTypes:[tu.TextureType.unpacked]};tn.padV2=(tr,tn,ti)=>(tf(tn),[tr.run(Object.assign(Object.assign({},tl),{cacheHint:ti.cacheKey,get:()=>tp(tr,tn[0],ti)}),tn)]),tn.parsePadAttributesV2=tr=>{let tn=tr.attributes.getString("mode","constant"),ti=tr.attributes.getFloat("value",0),ta=tr.attributes.getInts("pads");return(0,to.createAttributeWithCacheKey)({mode:tn,value:ti,pads:ta})},tn.padV11=(tr,ti,to)=>{td(ti);let ta=tc(tr,ti,to);return(0,tn.padV2)(tr,[ti[0]],ta)},tn.parsePadAttributesV11=tr=>tr.attributes.getString("mode","constant");let tc=(tr,tn,ti)=>{if(!tr.session.isInitializer(tn[1].dataId)||tn.length>=3&&!tr.session.isInitializer(tn[2].dataId))throw Error("dynamic pad attributes are not allowed");let ta=Array.from(tn[1].integerData),ts=tn.length>=3?tn[2].floatData[0]:0;return(0,to.createAttributeWithCacheKey)({mode:ti,pads:ta,value:ts})},tp=(tr,tn,ti)=>{let to=ta.ShapeUtil.padShape(tn.dims.slice(),ti.pads),ts=to.length,tl=` - ${th(tr,tn,ti)} - float process(int[${ts}] indices) { - return padA(indices); - }`;return{name:"Pad",inputNames:["A"],inputTypes:[tu.TextureType.unpacked],output:{dims:to,type:tn.type,textureType:tu.TextureType.unpacked},shaderSource:tl}},tf=tr=>{if(!tr||1!==tr.length)throw Error("Pad requires 1 input");if("float32"!==tr[0].type&&"float64"!==tr[0].type)throw Error("Invalid input type.")},td=tr=>{if(!tr||2!==tr.length&&3!==tr.length)throw Error("Pad requires 2 or 3 inputs");if("int32"!==tr[1].type||tr.length>=3&&"string"===tr[2].type)throw Error("Invalid input type.")},th=(tr,tn,ti)=>{let 
to=(0,ts.getGlsl)(tr.session.backend.glContext.version),[tl,tc]=tr.calculateTextureWidthAndHeight(tn.dims,tu.TextureType.unpacked),tp=ta.ShapeUtil.computeStrides(tn.dims);switch(ti.mode){case"constant":return tg(to,tn.dims,tp,tl,tc,ti.pads,ti.value);case"reflect":return tb(to,tn.dims,tp,tl,tc,ti.pads);case"edge":return tm(to,tn.dims,tp,tl,tc,ti.pads);default:throw Error("Invalid mode")}},tg=(tr,tn,ti,to,ta,ts,tu)=>{let tl=tn.length,tc="";for(let tr=tl-1;tr>=0;--tr)tc+=` - k = m[${tr}] - ${ts[tr]}; - if (k < 0) return constant; - if (k >= ${tn[tr]}) return constant; - offset += k * ${ti[tr]}; - `;return` - float padA(int m[${tl}]) { - const float constant = float(${tu}); - int offset = 0; - int k = 0; - ${tc} - vec2 coords = offsetToCoords(offset, ${to}, ${ta}); - float value = getColorAsFloat(${tr.texture2D}(A, coords)); - return value; - } - `},tb=(tr,tn,ti,to,ta,ts)=>{let tu=tn.length,tl="";for(let tr=tu-1;tr>=0;--tr)tl+=` - k = m[${tr}] - ${ts[tr]}; - if (k < 0) { k = -k; } - { - const int _2n_1 = ${2*(tn[tr]-1)}; - k = int( mod( float(k), float(_2n_1) ) ) ; - if(k >= ${tn[tr]}) { k = _2n_1 - k; } - } - offset += k * ${ti[tr]}; - `;return` - float padA(int m[${tu}]) { - int offset = 0; - int k = 0; - ${tl} - vec2 coords = offsetToCoords(offset, ${to}, ${ta}); - float value = getColorAsFloat(${tr.texture2D}(A, coords)); - return value; - } - `},tm=(tr,tn,ti,to,ta,ts)=>{let tu=tn.length,tl="";for(let tr=tu-1;tr>=0;--tr)tl+=` - k = m[${tr}] - ${ts[tr]}; - if (k < 0) k = 0; - if (k >= ${tn[tr]}) k = ${tn[tr]-1}; - offset += k * ${ti[tr]}; - `;return` - float padA(int m[${tu}]) { - int offset = 0; - int k = 0; - ${tl} - vec2 coords = offsetToCoords(offset, ${to}, ${ta}); - float value = getColorAsFloat(${tr.texture2D}(A, coords)); - return value; - } - `}},2143:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.globalMaxPool=tn.parseMaxPoolAttributes=tn.maxPool=tn.parseGlobalAveragePoolAttributes=tn.globalAveragePool=tn.parseAveragePoolAttributes=tn.averagePool=void 0;let to=ti(246),ta=ti(2517),ts=ti(2039);tn.averagePool=(tr,tn,ti)=>{td(tn);let to={name:"AveragePool",inputNames:["X"],inputTypes:[ts.TextureType.unpacked],cacheHint:ti.cacheKey};return[tr.run(Object.assign(Object.assign({},to),{get:()=>tu(tn,to,!1,ti)}),tn)]},tn.parseAveragePoolAttributes=tr=>{let tn=tr.attributes.getString("auto_pad","NOTSET"),ti=tr.attributes.getInt("ceil_mode",0),ta=0!==tr.attributes.getInt("count_include_pad",0),ts=tr.attributes.getInts("kernel_shape"),tu=tr.attributes.getInts("strides",[]),tl=tr.attributes.getInts("pads",[]);if(0!==ti)throw Error("using ceil() in shape computation is not yet supported for AveragePool");return(0,to.createAttributeWithCacheKey)({autoPad:tn,ceilMode:ti,countIncludePad:ta,kernelShape:ts,strides:tu,pads:tl})};let tu=(tr,tn,ti,to)=>{let[tu,tl]=tc(tr,to,ti),tp=ta.ShapeUtil.size(tu.kernelShape),tf="";tu.countIncludePad?tf+=`value /= float(${tp});`:tf+=`value /= float(${tp} - pad);`;let td=` - ${th(tr[0].dims,tu,"value += _X(x);",tf,"0.0")} - `;return Object.assign(Object.assign({},tn),{output:{dims:tl,type:tr[0].type,textureType:ts.TextureType.unpacked},shaderSource:td})};tn.globalAveragePool=(tr,tn,ti)=>{td(tn);let to={name:"GlobalAveragePool",inputNames:["X"],inputTypes:[ts.TextureType.unpacked],cacheHint:`${ti.countIncludePad}`};return[tr.run(Object.assign(Object.assign({},to),{get:()=>tu(tn,to,!0,ti)}),tn)]},tn.parseGlobalAveragePoolAttributes=tr=>{let 
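-/*
- * NOTE: GlobalAveragePool reuses the AveragePool shader generator with the
- * kernel implicitly spanning the whole spatial extent; the only attribute
- * kept is count_include_pad, which picks the divisor applied to the
- * accumulated sum. In the emitted GLSL (illustrative excerpt):
- *
- *   value /= float(kernelSize);        // count_include_pad != 0
- *   value /= float(kernelSize - pad);  // default: skip out-of-bounds taps
- *
- * where 'pad' counts the kernel taps that fell outside the input for this
- * output position.
- */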
tn=0!==tr.attributes.getInt("count_include_pad",0);return(0,to.createAttributeWithCacheKey)({autoPad:"",ceilMode:0,countIncludePad:tn,kernelShape:[],strides:[],pads:[]})},tn.maxPool=(tr,tn,ti)=>{td(tn);let to={name:"MaxPool",inputNames:["X"],inputTypes:[ts.TextureType.unpacked],cacheHint:ti.cacheKey};return[tr.run(Object.assign(Object.assign({},to),{get:()=>tl(tn,to,!1,ti)}),tn)]},tn.parseMaxPoolAttributes=tr=>{let tn=tr.attributes.getString("auto_pad","NOTSET"),ti=tr.attributes.getInt("ceil_mode",0),ta=tr.attributes.getInts("kernel_shape"),ts=tr.attributes.getInts("strides",[]),tu=tr.attributes.getInts("pads",[]),tl=tr.attributes.getInt("storage_order",0),tc=tr.attributes.getInts("dilations",[]);if(0!==tl)throw Error("column major storage order is not yet supported for MaxPool");if(0!==ti)throw Error("using ceil() in shape computation is not yet supported for MaxPool");return(0,to.createAttributeWithCacheKey)({autoPad:tn,ceilMode:ti,countIncludePad:!1,kernelShape:ta,strides:ts,pads:tu,storageOrder:tl,dilations:tc})};let tl=(tr,tn,ti,to)=>{let[ta,tu]=tc(tr,to,ti),tl=` - ${th(tr[0].dims,ta,"\n value = max(_X(x), value);\n ","","-1e5")} - `;return Object.assign(Object.assign({},tn),{output:{dims:tu,type:tr[0].type,textureType:ts.TextureType.unpacked},shaderSource:tl})},tc=(tr,tn,ti)=>{let to=tr[0].dims.slice(),ts=Object.hasOwnProperty.call(tn,"dilations"),tu=tn.kernelShape.slice(),tl=tn.strides.slice(),tc=ts?tn.dilations.slice():[],tp=tn.pads.slice();ta.PoolConvUtil.adjustPoolAttributes(ti,to,tu,tl,tc,tp);let tf=ta.PoolConvUtil.computePoolOutputShape(ti,to,tl,tc,tu,tp,tn.autoPad),td=Object.assign({},tn);return ts?Object.assign(td,{kernelShape:tu,strides:tl,pads:tp,dilations:tc,cacheKey:tn.cacheKey}):Object.assign(td,{kernelShape:tu,strides:tl,pads:tp,cacheKey:tn.cacheKey}),[td,tf]},tp={autoPad:"",ceilMode:0,countIncludePad:!1,kernelShape:[],strides:[],pads:[],storageOrder:0,dilations:[],cacheKey:""},tf={name:"GlobalMaxPool",inputNames:["X"],inputTypes:[ts.TextureType.unpacked]};tn.globalMaxPool=(tr,tn)=>(td(tn),[tr.run(Object.assign(Object.assign({},tf),{get:()=>tl(tn,tf,!0,tp)}),tn)]);let td=tr=>{if(!tr||1!==tr.length)throw Error("Pool ops requires 1 input.");if("float32"!==tr[0].type&&"float64"!==tr[0].type)throw Error("Invalid input type.")},th=(tr,tn,ti,to,ts)=>{let tu=tr.length;if(tn.kernelShape.length<=2){let ta=tn.kernelShape[tn.kernelShape.length-1],tl=tn.strides[tn.strides.length-1],tc=tn.pads[tn.pads.length/2-1],tp=tn.pads[tn.pads.length-1],tf=tr[tu-1],td="",th="",tg="";if(td=tc+tp!==0?` - for (int i = 0; i < ${ta}; i++) { - x[${tu} - 1] = indices[${tu} - 1] * ${tl} - ${tc} + i; - if (x[${tu} - 1] < 0 || x[${tu} - 1] >= ${tf}) { - pad++; - continue; - } - ${ti} - }`:` - for (int i = 0; i < ${ta}; i++) { - x[${tu} - 1] = indices[${tu} - 1] * ${tl} - ${tc} + i; - ${ti} - }`,2===tn.kernelShape.length){let ti=tn.kernelShape[tn.kernelShape.length-2],to=tn.strides[tn.strides.length-2],ts=tn.pads[tn.pads.length/2-2],tl=tn.pads[tn.pads.length-2],tc=tr[tu-2];th=ts+tl!==0?` - for (int j = 0; j < ${ti}; j++) { - x[${tu} - 2] = indices[${tu} - 2] * ${to} - ${ts} + j; - if (x[${tu} - 2] < 0 || x[${tu} - 2] >= ${tc}) { - pad+= ${ta}; - continue; - } - `:` - for (int j = 0; j < ${ti}; j++) { - x[${tu} - 2] = indices[${tu} - 2] * ${to} - ${ts} + j; - `,tg="\n }\n "}return` - float process(int indices[${tu}]) { - int x[${tu}]; - copyVec(indices, x); - - float value = ${ts}; - int pad = 0; - ${th} - ${td} - ${tg} - ${to} - return value; - } - `}{let 
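-/*
- * NOTE: for kernels of rank greater than 2 the pooling generator falls back
- * to a generic path (whose body starts here): each of the kernelSize taps
- * is expanded from a linear index into a multi-dimensional offset by the
- * emitted offsetToIndices() helper, i.e. the usual stride decomposition
- *
- *   indices[i] = offset / strides[i];  offset -= indices[i] * strides[i];
- *
- * applied per axis, with out-of-bounds taps counted in 'pad' exactly as in
- * the 1-D/2-D fast path above.
- */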
tl=ta.ShapeUtil.size(tn.kernelShape),tc=ta.ShapeUtil.computeStrides(tn.kernelShape),tp=tc.length,tf=tn.pads.length,td=tb(tp),th=tg(tr,"inputDims"),tm=tg(tn.pads,"pads"),ty=tg(tc,"kernelStrides"),t_=tg(tn.strides,"strides"),tv="";return` - ${td} - float process(int indices[${tu}]) { - int x[${tu}]; - copyVec(indices, x); - int offset[${tp}]; - int pads[${tf}]; - int inputDims[${tu}]; - int kernelStrides[${tp}]; - int strides[${tp}]; - ${tm} - ${th} - ${t_} - ${ty} - - float value = ${ts}; - int pad = 0; - bool isPad = false; - for (int i = 0; i < ${tl}; i++) { - offsetToIndices(i, kernelStrides, offset); - isPad = false; - for (int j = ${tu} - ${tp}; j < ${tu}; j++) { - x[j] = indices[j] * strides[j - ${tu} + ${tp}] - + offset[j - ${tu} + ${tp}] - pads[j - 2]; - ${tv=tn.pads.reduce((tr,tn)=>tr+tn)?` - if (x[j] >= inputDims[j] || x[j] < 0) { - pad++; - isPad = true; - break; - } - } - if (!isPad) { - ${ti} - }`:` - } - ${ti} - `} - } - ${to} - - return value; - } - `}},tg=(tr,tn)=>{let ti="";for(let to=0;to` - void offsetToIndices(int offset, int[${tr}] strides, out int[${tr}] indices) { - if (${tr} == 0) { - return; - } - for (int i = 0; i < ${tr} - 1; ++i) { - indices[i] = offset / strides[i]; - offset -= indices[i] * strides[i]; - } - indices[${tr} - 1] = offset; - }`},4939:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.reduceLogSumSquare=tn.reduceLogSum=tn.reduceProd=tn.reduceMin=tn.reduceMax=tn.reduceMean=tn.reduceSum=tn.parseReduceAttributes=void 0;let to=ti(246),ta=ti(782),ts=ti(2517),tu=ti(2039),tl=(tr,tn,ti,to,ta)=>{tp(tn);let ts={name:to,inputNames:["A"],inputTypes:[tu.TextureType.unpacked]};return[tr.run(Object.assign(Object.assign({},ts),{cacheHint:ti.cacheKey,get:()=>tc(tr,tn,ti,to,ta,ts)}),tn)]};tn.parseReduceAttributes=tr=>{let tn=tr.attributes.getInts("axes",[]),ti=1===tr.attributes.getInt("keepdims",1);return(0,to.createAttributeWithCacheKey)({axes:tn,keepDims:ti})};let tc=(tr,tn,ti,to,ta,tl)=>{let tc=[],tp=tn[0].dims.length||1,tf=[],td=ts.ShapeUtil.normalizeAxes(ti.axes,tn[0].dims.length),th=ta(tn,td),tg=th[1];for(let tr=0;tr=0||0===td.length?(ti.keepDims&&tc.push(1),tg=` - for(int j${tr} = 0; j${tr} < ${tn[0].dims[tr]}; j${tr}++) { - inputIdx[${tr}] = j${tr}; - ${tg} - }`):(tf.push(`inputIdx[${tr}] = outputIdx[${tc.length}];`),tc.push(tn[0].dims[tr]));let tb=` - float process(int outputIdx[${tc.length||1}]) { - float value; // final result - int inputIdx[${tp}]; // addressing input data - ${tf.join("\n")} - ${th[0]} // init ops for reduce max/min - ${tg} - ${th[2]} // final computation for reduce mean - return value; - }`;return Object.assign(Object.assign({},tl),{output:{dims:tc,type:tn[0].type,textureType:tu.TextureType.unpacked},shaderSource:tb})},tp=tr=>{if(!tr||1!==tr.length)throw Error("Reduce op requires 1 input.");if(-1===ta.NUMBER_TYPES.indexOf(tr[0].type))throw Error("Invalid input type.")};tn.reduceSum=(tr,tn,ti)=>tl(tr,tn,ti,"ReduceSum",()=>["value = 0.0;","value += _A(inputIdx);",""]),tn.reduceMean=(tr,tn,ti)=>tl(tr,tn,ti,"ReduceMean",(tr,tn)=>{let ti=1;for(let to=0;to=0||0===tn.length)&&(ti*=tr[0].dims[to]);return["value = 0.0;","value += _A(inputIdx);",`value /= ${ti}.;`]}),tn.reduceMax=(tr,tn,ti)=>tl(tr,tn,ti,"ReduceMax",(tr,tn)=>{let ti=[];for(let to=0;to=0||0===tn.length)&&ti.push(`inputIdx[${to}] = 0;`);return[`${ti.join("\n")} -value = _A(inputIdx);`,"value = max(value, _A(inputIdx));",""]}),tn.reduceMin=(tr,tn,ti)=>tl(tr,tn,ti,"ReduceMin",(tr,tn)=>{let ti=[];for(let to=0;to=0||0===tn.length)&&ti.push(`inputIdx[${to}] = 
0;`);return[`${ti.join("\n")} -value = _A(inputIdx);`,"value = min(value, _A(inputIdx));",""]}),tn.reduceProd=(tr,tn,ti)=>tl(tr,tn,ti,"ReduceProd",()=>["value = 1.0;","value *= _A(inputIdx);",""]),tn.reduceLogSum=(tr,tn,ti)=>tl(tr,tn,ti,"ReduceLogSum",()=>["value = 0.0;","value += _A(inputIdx);","value = log(value);"]),tn.reduceLogSumSquare=(tr,tn,ti)=>tl(tr,tn,ti,"ReduceLogSumSquare",()=>["float t; value = 0.0;","t = _A(inputIdx); value += t * t;",""])},7019:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.isReshapeCheap=tn.processDims3D=tn.createPackedReshape3DProgramInfoLoader=void 0;let to=ti(2517),ta=ti(5060),ts=ti(2039),tu=ti(2827);tn.createPackedReshape3DProgramInfoLoader=(tr,tn,ti)=>{var tl;let tc=(tl=ti,{name:"Reshape (packed)",inputTypes:[ts.TextureType.packed],inputNames:["A"],cacheHint:`${tl}`});return Object.assign(Object.assign({},tc),{get:()=>((tr,tn,ti,tl)=>{let tc=tn.dims,tp=tl,tf="";for(let tr=0;tr<4;tr++){let tn="";switch(tr){case 0:tn="outputCoords = rc;";break;case 1:tn="outputCoords = ivec3(rc.x, rc.y+1, rc.z);";break;case 2:tn="outputCoords = ivec3(rc.x, rc.y, rc.z+1);";break;case 3:tn="outputCoords = ivec3(rc.x, rc.y+1, rc.z+1);";break;default:throw Error()}tf+=` - ${tn} - ${tr>0?"if(outputCoords.y < rows && outputCoords.z < cols){":""} - int flattenedIndex = getFlattenedIndex(outputCoords); - - ivec3 inputRC = inputCoordsFromReshapedOutCoords(flattenedIndex); - vec2 innerDims = vec2(float(inputRC.y),float(inputRC.z)); - - result[${tr}] = getChannel(getA(inputRC.x, inputRC.y, inputRC.z), innerDims); - - ${tr>0?"}":""} - `}let td=(0,ta.getGlsl)(tr.session.backend.glContext.version),th=` - ${function(tr){let tn=to.ShapeUtil.computeStrides(tr),ti=["b","r","c"],ta="index";return` - ivec3 inputCoordsFromReshapedOutCoords(int index) { - ${tn.map((tr,to)=>`int ${ti[to]} = ${ta} / ${tr}; ${to===tn.length-1?`int ${ti[to+1]} = ${ta} - ${ti[to]} * ${tr}`:`index -= ${ti[to]} * ${tr}`};`).join("")} - return ivec3(b, r, c); - } - `}(tc)} - ${function(tr){let tn=to.ShapeUtil.computeStrides(tr);return` - int getFlattenedIndex(ivec3 coords) { - // reverse y, z order - return coords.x * ${tn[0]} + coords.z * ${tn[1]} + coords.y; - } -`}(tp)} - ${(0,tu.unpackFromChannel)()} - - void main() { - ivec3 rc = getOutputCoords(); - - vec4 result = vec4(0.0); - - ivec3 outputCoords; - int rows = ${tp[2]}; - int cols = ${tp[1]}; - - ${tf} - ${td.output} = result; - } - `;return Object.assign(Object.assign({},ti),{output:{dims:tp,type:tn.type,textureType:ts.TextureType.packed},shaderSource:th,hasMain:!0})})(tr,tn,tc,ti)})},tn.processDims3D=function(tr){if(0===tr.length)return[1,1,1];let tn=1;for(let ti=0;ti1?tr[tr.length-2]:1,tr[tr.length-1]]},tn.isReshapeCheap=function(tr,tn){return 0===tr.length||0===tn.length||(tr.length<2||tn.length<2?tr[tr.length-1]===tn[tn.length-1]:tr[tr.length-1]===tn[tn.length-1]&&tr[tr.length-2]===tn[tn.length-2])}},718:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.reshape=void 0;let to=ti(2517);tn.reshape=(tr,tn)=>{let ti=to.ShapeUtil.calculateReshapedDims(tn[0].dims,tn[1].integerData);return tr.session.pack?[tr.reshapePacked(tn[0],ti)]:[tr.reshapeUnpacked(tn[0],ti)]}},2268:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseResizeAttributesV11=tn.parseResizeAttributesV10=tn.resize=void 0;let 
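-/*
- * NOTE: the reduce module above drives a single shared code generator with
- * three GLSL snippets per op (init, per-element update, finalizer), so the
- * seven reduce variants differ only in those strings. As wired up above:
- *
- *   ReduceSum:    ['value = 0.0;', 'value += _A(inputIdx);', '']
- *   ReduceMean:   ['value = 0.0;', 'value += _A(inputIdx);', 'value /= N.;']
- *   ReduceProd:   ['value = 1.0;', 'value *= _A(inputIdx);', '']
- *   ReduceLogSum: ['value = 0.0;', 'value += _A(inputIdx);', 'value = log(value);']
- *
- * where N stands for the element count of the reduced axes, computed on the
- * host before the shader is emitted.
- */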
to=ti(5060),ta=ti(2039),ts=ti(9390),tu=ti(2827),tl=ti(9793),tc={name:"Resize",inputNames:["A"],inputTypes:[ta.TextureType.packed]};tn.resize=(tr,tn,ti)=>((0,tl.validateInputs)(tn,ti),[tr.run(Object.assign(Object.assign({},tc),{cacheHint:ti.cacheKey,get:()=>tp(tr,tn,ti)}),tn)]),tn.parseResizeAttributesV10=tr=>(0,tl.parseUpsampleAttributes)(tr,10),tn.parseResizeAttributesV11=tr=>(0,tl.parseUpsampleAttributes)(tr,11);let tp=(tr,tn,ti)=>{let tl=(0,to.getGlsl)(tr.session.backend.glContext.version),[tp,td]=tf(tn,ti);if(tp.every(tr=>1===tr)&&"tf_crop_and_resize"!==ti.coordinateTransformMode)return Object.assign(Object.assign({},tc),{output:{dims:td,type:tn[0].type,textureType:ta.TextureType.packed},hasMain:!0,shaderSource:`void main() { - vec4 v = ${tl.texture2D}(X, TexCoords); - ${tl.output} = v; - }`});let th=td.length;if(th<2)throw Error(`output dimension should be at least 2, but got ${th}`);let tg=td[th-2],tb=td[th-1],tm=tn[0].dims;if(th!==tm.length)throw Error(`output dimension should match input ${tm.length}, but got ${th}`);let ty=tm[th-2],t_=tm[th-1],tv=tp[th-2],tx=tp[th-1],tw="";if("linear"!==ti.mode)throw Error(`resize (packed) does not support mode: '${ti.mode}'`);switch(ti.coordinateTransformMode){case"asymmetric":tw="\n vec4 getSourceFracIndex(ivec4 coords) {\n return vec4(coords) / scaleWHWH;\n }\n ";break;case"half_pixel":tw="\n vec4 getSourceFracIndex(ivec4 coords) {\n return (vec4(coords) + 0.5) / scaleWHWH - 0.5;\n }\n ";break;case"pytorch_half_pixel":tw=` - vec4 getSourceFracIndex(ivec4 coords) { - vec4 fcoords = vec4(coords); - return vec4( - ${tb}.0 > 1.0 ? (fcoords.x + 0.5) / scaleWHWH.x - 0.5 : 0.0, - ${tg}.0 > 1.0 ? (fcoords.y + 0.5) / scaleWHWH.y - 0.5 : 0.0, - ${tb}.0 > 1.0 ? (fcoords.z + 0.5) / scaleWHWH.z - 0.5 : 0.0, - ${tg}.0 > 1.0 ? (fcoords.w + 0.5) / scaleWHWH.w - 0.5 : 0.0 - ); - } - `;break;case"align_corners":tw=` - vec4 getSourceFracIndex(ivec4 coords) { - vec4 resized = vec4(${tb}.0 - 1.0, ${tg}.0 - 1.0, ${tb}.0 - 1.0, - ${tg}.0 - 1.0); - vec4 original = vec4(${t_}.0 - 1.0, ${ty}.0 - 1.0, ${t_}.0 - 1.0, - ${ty}.0 - 1.0); - vec4 new_scale = original / resized; - return vec4(coords) * new_scale; - } - `;break;default:throw Error(`resize (packed) does not support coordinateTransformMode: '${ti.coordinateTransformMode}'`)}let tT=(0,ts.getCoordsDataType)(th),tS=` - const vec2 inputWH = vec2(${ty}.0, ${t_}.0); - const vec4 scaleWHWH = vec4(float(${tv}), float(${tx}), float(${tv}), float(${tx})); - ${(0,tu.unpackFromChannel)()} - ${tw} - float getAValue(int x10, int r, int c, int d) { - return getChannel(getA(x10, r, c, d), vec2(c, d)); - } - void main() { - ${tT} rc = getOutputCoords(); - - int batch = rc[0]; - int depth = rc[1]; - - // retrieve the 4 coordinates that is used in the 4 packed output values. - ivec4 coords = ivec4(rc.wz, rc.w + 1, rc.z + 1); - - // calculate the source index in fraction - vec4 sourceFrac = getSourceFracIndex(coords); - - // get the lower and upper bound of the 4 values that will be packed into one texel. 
- ivec4 x00 = ivec4(max(sourceFrac.xy, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.xy))); - ivec4 x01 = ivec4(max(sourceFrac.xw, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.xw))); - ivec4 x10 = ivec4(max(sourceFrac.zy, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.zy))); - ivec4 x11 = ivec4(max(sourceFrac.zw, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.zw))); - - bool hasNextRow = rc.w < ${tg-1}; - bool hasNextCol = rc.z < ${tb-1}; - - // pack x00, x01, x10, x11's top-left corner into one vec4 structure - vec4 topLeft = vec4( - getAValue(batch, depth, x00.x, x00.y), - hasNextCol ? getAValue(batch, depth, x01.x, x01.y) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.x, x10.y) : 0.0, - (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.x, x11.y) : 0.0); - - // pack x00, x01, x10, x11's top-right corner into one vec4 structure - vec4 topRight = vec4( - getAValue(batch, depth, x00.x, x00.w), - hasNextCol ? getAValue(batch, depth, x01.x, x01.w) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.x, x10.w) : 0.0, - (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.x, x11.w) : 0.0); - - // pack x00, x01, x10, x11's bottom-left corner into one vec4 structure - vec4 bottomLeft = vec4( - getAValue(batch, depth, x00.z, x00.y), - hasNextCol ? getAValue(batch, depth, x01.z, x01.y) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.z, x10.y) : 0.0, - (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.z, x11.y) : 0.0); - - // pack x00, x01, x10, x11's bottom-right corner into one vec4 structure - vec4 bottomRight = vec4( - getAValue(batch, depth, x00.z, x00.w), - hasNextCol ? getAValue(batch, depth, x01.z, x01.w) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.z, x10.w) : 0.0, - (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.z, x11.w) : 0.0); - - // calculate the interpolation fraction on u and v direction - vec4 frac = vec4(sourceFrac) - floor(sourceFrac); - vec4 clampFrac = clamp(frac, vec4(0.0), vec4(1.0)); - - vec4 top = mix(topLeft, topRight, clampFrac.ywyw); - vec4 bottom = mix(bottomLeft, bottomRight, clampFrac.ywyw); - vec4 newValue = mix(top, bottom, clampFrac.xxzz); - - ${tl.output} = vec4(newValue); - } - `;return Object.assign(Object.assign({},tc),{output:{dims:td,type:tn[0].type,textureType:ta.TextureType.packed},hasMain:!0,shaderSource:tS})},tf=(tr,tn)=>{let ti=tr[0].dims,to,ta=tn.scales;if(0===ta.length){let ts=tr[tn.scalesInputIdx];if(ts&&0!==ts.size){if(tr[tn.sizesInputIdx])throw Error("Only one of scales or sizes must be provided as input.");ta=td(ts,tn.mode,tn.isResize)}else{let ts=tr[tn.sizesInputIdx];if(!ts||0===ts.size)throw Error("Either scales or sizes MUST be provided as input.");ta=th(to=Array.from(ts.integerData),ti,tn.mode,tn.isResize)}}else if(tr[tn.sizesInputIdx])throw Error("Only one of scales or sizes must be provided as input.");let ts=to||ti.map((tr,tn)=>Math.floor(tr*ta[tn]));return[ta,ts]},td=(tr,tn,ti)=>{let to=Array.from(tr.floatData);return(0,tl.scalesValidation)(to,tn,ti),to},th=(tr,tn,ti,to)=>{let ta=tn.length,ts=Array(ta);for(let ti=0,to=ta;ti{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.shape=void 0;let to=ti(9162);tn.shape=(tr,tn)=>(ta(tn),[new to.Tensor([tn[0].dims.length],"int32",void 0,void 0,new Int32Array(tn[0].dims))]);let ta=tr=>{if(!tr||1!==tr.length)throw Error("Shape requires 1 input.")}},2278:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.sliceV10=tn.parseSliceAttributes=tn.slice=void 0;let 
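-/*
- * NOTE: the packed Resize shader above supports only mode='linear' and maps
- * an output coordinate x to a fractional source coordinate according to
- * coordinate_transformation_mode:
- *
- *   asymmetric:          src = x / scale
- *   half_pixel:          src = (x + 0.5) / scale - 0.5
- *   pytorch_half_pixel:  as half_pixel, but 0 whenever the output length <= 1
- *   align_corners:       src = x * (inLen - 1) / (outLen - 1)
- *
- * The four neighbouring texels are then fetched and blended with mix() on
- * the fractional parts, i.e. ordinary bilinear interpolation performed per
- * packed vec4 lane.
- */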
to=ti(246),ta=ti(782),ts=ti(2517),tu=ti(2039),tl={name:"Slice",inputNames:["A"],inputTypes:[tu.TextureType.unpacked]};tn.slice=(tr,tn,ti)=>(tp(tn),[tr.run(Object.assign(Object.assign({},tl),{cacheHint:ti.cacheKey,get:()=>tc(tr,tn[0],ti)}),tn)]),tn.parseSliceAttributes=tr=>{let tn=tr.attributes.getInts("starts"),ti=tr.attributes.getInts("ends"),ta=tr.attributes.getInts("axes",[]);return(0,to.createAttributeWithCacheKey)({starts:tn,ends:ti,axes:ta})};let tc=(tr,tn,ti)=>{let to=0===ti.axes.length?tn.dims.slice(0).map((tr,tn)=>tn):ti.axes,ta=ts.ShapeUtil.normalizeAxes(to,tn.dims.length),tc=ti.starts.map((tr,ti)=>tr>tn.dims[ta[ti]]-1?tn.dims[ta[ti]]:ts.ShapeUtil.normalizeAxis(tr,tn.dims[ta[ti]])),tp=ti.ends.map((tr,ti)=>tr>tn.dims[ta[ti]]-1?tn.dims[ta[ti]]:ts.ShapeUtil.normalizeAxis(tr,tn.dims[ta[ti]])),tf=tn.dims.slice(),td=[];for(let tr=0;tr0&&td.push(`outputIdx[${ta[tr]}] += ${tc[tr]};`);let th=` - float process(int outputIdx[${tf.length}]) { - ${td.join("\n ")} - return _A(outputIdx); - }`;return Object.assign(Object.assign({},tl),{output:{dims:tf,type:tn.type,textureType:tu.TextureType.unpacked},shaderSource:th})},tp=tr=>{if(!tr||1!==tr.length)throw Error("Slice requires 1 input.");if(-1===ta.NUMBER_TYPES.indexOf(tr[0].type))throw Error("Invalid input type.")};tn.sliceV10=(tr,tn)=>{td(tn);let ti=tf(tr,tn);return[tr.run(Object.assign(Object.assign({},tl),{cacheHint:ti.cacheKey,get:()=>tc(tr,tn[0],ti)}),[tn[0]])]};let tf=(tr,tn)=>{if(!tr.session.isInitializer(tn[1].dataId)||!tr.session.isInitializer(tn[2].dataId)||tn.length>=4&&!tr.session.isInitializer(tn[3].dataId)||tn.length>=5&&!tr.session.isInitializer(tn[4].dataId))throw Error("dynamic slice attributes are not allowed");if(tn.length>=5&&tn[4].integerData.some(tr=>1!==tr))throw Error("currently non-1 steps is not supported for Slice");let ti=Array.from(tn[1].integerData),to=Array.from(tn[2].integerData),ta=tn.length>=4?Array.from(tn[3].integerData):[];return{starts:ti,ends:to,axes:ta,cacheKey:`${ta};${ti};${to}`}},td=tr=>{if(!tr||tr.length<3||tr.length>5)throw Error("Invalid input number.");if("int32"!==tr[1].type||1!==tr[1].dims.length||"int32"!==tr[2].type||1!==tr[2].dims.length||tr.length>=4&&("int32"!==tr[3].type||1!==tr[3].dims.length)||tr.length>=5&&("int32"!==tr[4].type||1!==tr[4].dims.length))throw Error("Invalid input type.")}},5524:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.softmaxV13=tn.parseSoftmaxAttributesV13=tn.parseSoftmaxAttributes=tn.softmax=void 0;let to=ti(246),ta=ti(2517),ts=ti(5060),tu=ti(2039),tl=ti(3738),tc={name:"SoftmaxComputeMax",inputNames:["A"],inputTypes:[tu.TextureType.unpacked]},tp={name:"SoftmaxComputeScale",inputNames:["A","Max"],inputTypes:[tu.TextureType.unpacked,tu.TextureType.unpacked]},tf={name:"SoftMax",inputNames:["A","Max","Norm"],inputTypes:[tu.TextureType.unpacked,tu.TextureType.unpacked,tu.TextureType.unpacked]};tn.softmax=(tr,tn,ti)=>{tm(tn);let to=tn[0].dims.slice(),ts=ta.ShapeUtil.normalizeAxis(ti.axis,to.length),tu=ta.ShapeUtil.sizeToDimension(to,ts),tl=ta.ShapeUtil.sizeFromDimension(to,ts);return td(tr,tn,ti,tu,tl)},tn.parseSoftmaxAttributes=tr=>(0,to.createAttributeWithCacheKey)({axis:tr.attributes.getInt("axis",1)}),tn.parseSoftmaxAttributesV13=tr=>(0,to.createAttributeWithCacheKey)({axis:tr.attributes.getInt("axis",-1)}),tn.softmaxV13=(tr,tn,ti)=>{tm(tn);let 
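-/*
- * NOTE: softmax runs as three chained shaders: a row maximum
- * (SoftmaxComputeMax), a row normalisation factor (SoftmaxComputeScale, the
- * sum of exp(x - max)), and a final exp(x - max) / norm pass, which is the
- * usual numerically stable formulation. The opset-13 entry point whose body
- * starts here also handles a non-trailing axis by transposing it to the
- * last position and transposing back afterwards. Sketched in TypeScript
- * (helper names are ours):
- *
- *   function softmaxV13(x: Tensor, axis: number): Tensor {
- *     const rank = x.dims.length;
- *     if (axis === rank - 1) return softmaxLastAxis(x);
- *     const perm = [...Array(rank).keys()];
- *     [perm[axis], perm[rank - 1]] = [perm[rank - 1], perm[axis]];
- *     // the swap is its own inverse, so the same perm restores the layout
- *     return transpose(softmaxLastAxis(transpose(x, perm)), perm);
- *   }
- */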
ts=tn[0].dims.slice(),tu=ta.ShapeUtil.normalizeAxis(ti.axis,ts.length),tc=ts.length,tp=tu!==tc-1,tf=[],th,tg=[],tb=[];tp&&((tg=Array.from({length:tc}).map((tr,tn)=>tn))[tu]=tc-1,tg[tc-1]=tu,tg.map(tr=>tf.push(ts[tr])),th=(0,to.createAttributeWithCacheKey)({perm:tg}),tb=(0,tl.transpose)(tr,tn,th));let ty=tp?ta.ShapeUtil.sizeToDimension(tf,tc-1):ta.ShapeUtil.sizeToDimension(ts,tc-1),t_=tp?ta.ShapeUtil.sizeFromDimension(tf,tc-1):ta.ShapeUtil.sizeFromDimension(ts,tc-1),tv=td(tr,tp?tb:tn,ti,ty,t_);return tp?(0,tl.transpose)(tr,tv,th):tv};let td=(tr,tn,ti,to,ta)=>{let ts=th(tr,tn[0],to,ta,[to]),tu=tr.run(Object.assign(Object.assign({},tc),{cacheHint:ti.cacheKey,get:()=>ts}),tn),tl=tg(tr,tn[0],to,ta,ts.output.dims,[to]),td=tr.run(Object.assign(Object.assign({},tp),{cacheHint:ti.cacheKey,get:()=>tl}),[tn[0],tu]),tm=tb(tr,tn[0],to,ta,ts.output.dims,tl.output.dims);return[tr.run(Object.assign(Object.assign({},tf),{cacheHint:ti.cacheKey,get:()=>tm}),[tn[0],tu,td])]},th=(tr,tn,ti,to,ta)=>{let[tl,tp]=tr.calculateTextureWidthAndHeight(tn.dims,tu.TextureType.unpacked),tf=ta.length;if(ti<1||to<1)throw Error("Logical row count N and feature count D must be greater than or equal to 1");if(1!==ta.length)throw Error("Dimensionality of the output should be 1");if(ta[0]!==ti)throw Error("Shape of the output should be equal to logical row count");let td=(0,ts.getGlsl)(tr.session.backend.glContext.version),th=` - float process(int[${tf}] indices) { - int logical_row_start_offset = indices[0] * ${to}; - - float max = getColorAsFloat(${td.texture2D}(A, offsetToCoords(logical_row_start_offset, ${tl}, - ${tp} ))); - for(int i=1; i<${to}; ++i) - { - float current = getColorAsFloat(${td.texture2D}(A, offsetToCoords(logical_row_start_offset + i, - ${tl}, ${tp}))); - if(current > max) - max = current; - } - - return max; - }`;return Object.assign(Object.assign({},tc),{output:{dims:ta,type:tn.type,textureType:tu.TextureType.unpacked},shaderSource:th})},tg=(tr,tn,ti,to,ta,tl)=>{let[tc,tf]=tr.calculateTextureWidthAndHeight(tn.dims,tu.TextureType.unpacked),td=tl.length;if(ti<1||to<1)throw Error("Logical row count N and feature count D must be greater than or equal to 1");if(1!==tl.length)throw Error("Dimensionality of the output should be 1");if(tl[0]!==ti)throw Error("Shape of the output should be equal to logical row count");if(1!==ta.length)throw Error("Dimensionality of the intermediate results should be 1");if(ta[0]!==ti)throw Error("Shape of the intermediate results should be equal to logical row count");let th=` - float process(int[${td}] indices) { - int logical_row_start_offset = indices[0] * ${to}; - - float norm_factor = 0.0; - float max = _Max(indices); - for(int i=0; i<${to}; ++i) - { - norm_factor += exp(getColorAsFloat(${(0,ts.getGlsl)(tr.session.backend.glContext.version).texture2D}(A, offsetToCoords(logical_row_start_offset + i, - ${tc}, ${tf}))) - max); - } - - return norm_factor; - }`;return Object.assign(Object.assign({},tp),{output:{dims:tl,type:tn.type,textureType:tu.TextureType.unpacked},shaderSource:th})},tb=(tr,tn,ti,to,ta,ts)=>{let[tl,tc]=tr.calculateTextureWidthAndHeight(tn.dims,tu.TextureType.unpacked),tp=tn.dims.length;if(ti<1||to<1)throw Error("Logical row count N and feature count D must be greater than or equal to 1");if(1!==ta.length||1!==ts.length)throw Error("Dimensionality of the intermediate results should be 1");if(ta[0]!==ti||ts[0]!==ti)throw Error("Shape of the intermediate results should be equal to logical row count");let td=` - float process(int[${tp}] indices) { - - // get offset 
of current logical tensor index from the 2-D texture coordinates (TexCoords) - int offset = coordsToOffset(TexCoords, ${tl}, ${tc}); - - //determine the logical row for this index - int logical_row_index[1]; - logical_row_index[0] = offset / ${to}; - - float norm_factor = _Norm(logical_row_index); - - // avoid possible division by 0 - // if norm_facor is 0, all elements are zero - // if so, return 0 - if(norm_factor == 0.0) - return 0.0; - - return exp(_A(indices) - _Max(logical_row_index)) / norm_factor; - }`;return Object.assign(Object.assign({},tf),{output:{dims:tn.dims,type:tn.type,textureType:tu.TextureType.unpacked},shaderSource:td})},tm=tr=>{if(!tr||1!==tr.length)throw Error("Softmax requires 1 input.");if("float32"!==tr[0].type&&"float64"!==tr[0].type)throw Error("Invalid input type")}},5975:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseSplitAttributes=tn.split=void 0;let to=ti(246),ta=ti(2517),ts=ti(2039),tu={name:"Split",inputNames:["A"],inputTypes:[ts.TextureType.unpacked]};tn.split=(tr,tn,ti)=>{tp(tn);let to=ta.ShapeUtil.normalizeAxis(ti.axis,tn[0].dims.length),ts=tl(tr,tn,to,ti),tf=[];for(let ta=0;tatc(tr,tn[0],ti,to,ta)}),tn));return tf},tn.parseSplitAttributes=tr=>{let tn=tr.attributes.getInt("axis",0),ti=tr.attributes.getInts("split",[]),ta=tr.outputs.length;return(0,to.createAttributeWithCacheKey)({axis:tn,split:ti,numOutputs:ta})};let tl=(tr,tn,ti,to)=>{let[,ts]=ta.SplitUtil.splitShape(tn[0].dims,ti,to.split,to.numOutputs);return ts.length},tc=(tr,tn,ti,to,tl)=>{let[tc,tp]=ta.SplitUtil.splitShape(tn.dims,to,ti.split,ti.numOutputs),tf=tp[tl],td=tc[tl],th=` - float process(int indices[${td.length}]) { - indices[${to}] += ${tf}; - return _A(indices); - } - `;return Object.assign(Object.assign({},tu),{cacheHint:`${ti.cacheKey}:${tl}`,output:{dims:td,type:tn.type,textureType:ts.TextureType.unpacked},shaderSource:th})},tp=tr=>{if(!tr||1!==tr.length)throw Error("Split requires one input.");if("int8"!==tr[0].type&&"uint8"!==tr[0].type&&"int16"!==tr[0].type&&"uint16"!==tr[0].type&&"int32"!==tr[0].type&&"uint32"!==tr[0].type&&"float32"!==tr[0].type&&"float64"!==tr[0].type&&"bool"!==tr[0].type)throw Error("Invalid input type.")}},3933:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseSqueezeAttributes=tn.squeezeV13=tn.squeeze=void 0;let to=ti(2517);tn.squeeze=(tr,tn,ti)=>{ta(tn);let ts=to.ShapeUtil.squeezeShape(tn[0].dims,ti);return[tr.reshapeUnpacked(tn[0],ts)]},tn.squeezeV13=(tr,ti)=>(ts(ti),(0,tn.squeeze)(tr,[ti[0]],Array.from(ti[1].integerData))),tn.parseSqueezeAttributes=tr=>tr.attributes.getInts("axes");let ta=tr=>{if(!tr||1!==tr.length)throw Error("Squeeze requires 1 input.");if("string"===tr[0].type)throw Error("invalid input tensor types.")},ts=tr=>{if(!tr||2!==tr.length)throw Error("Squeeze requires 2 inputs.");if("int32"!==tr[1].type)throw Error("Invalid input type.")}},6558:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.sum=void 0;let to=ti(5060),ta=ti(2039);tn.sum=(tr,tn)=>{tu(tn);let ti={name:"Sum",inputNames:tn.map((tr,tn)=>`X${tn}`),inputTypes:Array(tn.length).fill(ta.TextureType.unpacked)};return[tr.run(Object.assign(Object.assign({},ti),{get:()=>ts(tr,tn,ti)}),tn)]};let ts=(tr,tn,ti)=>{let ts=(0,to.getGlsl)(tr.session.backend.glContext.version),tu=tn[0].dims.slice(),tl=` - void main() { - vec4 result = ${tn.map((tr,tn)=>`${ts.texture2D}(X${tn},TexCoords)`).join(" + ")}; - ${ts.output} = result; - } - `;return 
Object.assign(Object.assign({},ti),{output:{dims:tu,type:tn[0].type,textureType:ta.TextureType.unpacked},hasMain:!0,shaderSource:tl})},tu=tr=>{if(!tr||0===tr.length)throw Error("Sum requires inputs.");let tn=tr[0].dims.length;for(let ti=1;ti{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.tile=void 0;let to=ti(782),ta=ti(2039);tn.tile=(tr,tn)=>{tu(tn);let ti={name:"Tile",inputNames:["A"],inputTypes:[ta.TextureType.unpacked]};return[tr.run(Object.assign(Object.assign({},ti),{get:()=>ts(tr,tn,ti)}),tn)]};let ts=(tr,tn,ti)=>{let to=tn[0].dims.slice(),ts=Array(to.length),tu=[];for(let tr=0;tr{if(!tr||2!==tr.length)throw Error("Tile requires 2 input.");if(1!==tr[1].dims.length)throw Error("The second input shape must 1 dimension.");if(tr[1].dims[0]!==tr[0].dims.length)throw Error("Invalid input shape.");if(-1===to.NUMBER_TYPES.indexOf(tr[0].type))throw Error("Invalid input type.");if("int32"!==tr[1].type&&"int16"!==tr[1].type)throw Error("Invalid repeat type.")}},3738:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseTransposeAttributes=tn.transpose=void 0;let to=ti(246),ta=ti(2517),ts=ti(2039),tu={name:"Transpose",inputNames:["A"],inputTypes:[ts.TextureType.unpacked]};tn.transpose=(tr,tn,ti)=>(td(tn),[tr.run(Object.assign(Object.assign({},tu),{cacheHint:ti.cacheKey,get:()=>tl(tr,tn[0],ti.perm)}),tn)]),tn.parseTransposeAttributes=tr=>(0,to.createAttributeWithCacheKey)({perm:tr.attributes.getInts("perm",[])});let tl=(tr,tn,ti)=>{let to=tn.dims;ti=tc(to,ti);let ta=tp(to,ti),tl=to.length,td=` - ${tf("perm",ti,tl)} - float process(int indices[${tl}]) { - int a[${tl}]; - perm(a, indices); - return _A(a); - }`;return Object.assign(Object.assign({},tu),{output:{dims:ta,type:tn.type,textureType:ts.TextureType.unpacked},shaderSource:td})},tc=(tr,tn)=>(tn&&tn.length!==tr.length&&(tn=[...tr.keys()].reverse()),tn),tp=(tr,tn)=>(tn=tc(tr,tn),ta.ShapeUtil.sortBasedOnPerm(tr,tn)),tf=(tr,tn,ti)=>{let to=[];to.push(`void ${tr}(out int a[${ti}], int src[${ti}]) {`);for(let tr=0;tr{if(!tr||1!==tr.length)throw Error("Transpose requires 1 input.");if("float32"!==tr[0].type&&"float64"!==tr[0].type)throw Error("input should be float tensor")}},8710:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.encodeAsUint8=void 0;let to=ti(5060),ta=ti(2039);tn.encodeAsUint8=(tr,tn)=>{let ti=tn.shape,ts=(0,to.getGlsl)(tr.session.backend.glContext.version),tu=` - const float FLOAT_MAX = 1.70141184e38; - const float FLOAT_MIN = 1.17549435e-38; - - bool isNaN(float val) { - return (val < 1.0 || 0.0 < val || val == 0.0) ? 
false : true; - } - - highp vec4 encodeAsUint8(highp float v) { - if (isNaN(v)) { - return vec4(255, 255, 255, 255); - } - - highp float av = abs(v); - - if(av < FLOAT_MIN) { - return vec4(0.0, 0.0, 0.0, 0.0); - } else if(v > FLOAT_MAX) { - return vec4(0.0, 0.0, 128.0, 127.0) / 255.0; - } else if(v < -FLOAT_MAX) { - return vec4(0.0, 0.0, 128.0, 255.0) / 255.0; - } - - highp vec4 c = vec4(0,0,0,0); - - highp float e = floor(log2(av)); - highp float m = exp2(fract(log2(av))) - 1.0; - - c[2] = floor(128.0 * m); - m -= c[2] / 128.0; - c[1] = floor(32768.0 * m); - m -= c[1] / 32768.0; - c[0] = floor(8388608.0 * m); - - highp float ebias = e + 127.0; - c[3] = floor(ebias / 2.0); - ebias -= c[3] * 2.0; - c[2] += floor(ebias) * 128.0; - - c[3] += 128.0 * step(0.0, -v); - - return c / 255.0; - } - - void main() { - float value = ${ts.texture2D}(X,TexCoords).r; - ${ts.output} = encodeAsUint8(value); - }`,tl={name:"Uint8Encode",inputTypes:[ta.TextureType.unpacked],inputNames:["X"],output:{dims:ti,type:tn.tensor.type,textureType:ta.TextureType.downloadUint8AsFloat},shaderSource:tu,hasMain:!0};return tr.executeProgram(tl,[tn.tensor])}},4909:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.tanh=tn.tan=tn.sqrt=tn.sin=tn.sigmoid=tn.relu=tn.not=tn.neg=tn.log=tn.parseLeakyReluAttributes=tn.leakyRelu=tn.identity=tn.floor=tn.exp=tn.parseEluAttributes=tn.elu=tn.cos=tn.ceil=tn.clipV11=tn.parseClipAttributes=tn.clip=tn.atan=tn.asin=tn.acos=tn.abs=tn.glslTanh=tn.glslTan=tn.glslSqrt=tn.glslSigmoid=tn.glslRelu=tn.glslSin=tn.glslNot=tn.glslNeg=tn.glslLog=tn.glslLeakyRelu=tn.glslIdentity=tn.glslClip=tn.glslFloor=tn.glslExp=tn.glslElu=tn.glslCos=tn.glslCeil=tn.glslAtan=tn.glslAsin=tn.glslAcos=tn.glslAbs=void 0;let to=ti(246),ta=ti(2517),ts=ti(8520),tu=ti(5060),tl=ti(2039);function tc(){return t$("abs")}function tp(){return t$("acos")}function tf(){return t$("asin")}function td(){return t$("atan")}function th(){return t$("ceil")}function tg(){return t$("cos")}function tb(tr){let tn="elu";return{body:` - const float alpha = float(${tr}); - - float ${tn}_(float a) { - return a >= 0.0 ? a: (exp(a) - 1.0) * alpha; - } - vec4 ${tn}_(vec4 v) { - return vec4(${tn}_(v.x), ${tn}_(v.y), ${tn}_(v.z), ${tn}_(v.w)); - } - `,name:tn,type:ts.FunctionType.ValueBased}}function tm(){return t$("exp")}function ty(){return t$("floor")}function t_(tr,tn){let ti="clip";return{body:` - const float min = float(${tr}); - const float max = float(${tn}); - - float ${ti}_(float a) { - return clamp(a, min, max); - } - vec4 ${ti}_(vec4 v) { - return clamp(v, min, max); - } - `,name:ti,type:ts.FunctionType.ValueBased}}function tv(){let tr="indentity";return{body:` - float ${tr}_(float a) { - return a; - } - vec4 ${tr}_(vec4 v) { - return v; - } - `,name:tr,type:ts.FunctionType.ValueBased}}function tx(tr){let tn="leakyRelu";return{body:` - const float alpha = float(${tr}); - - float ${tn}_(float a) { - return a < 0.0 ? a * alpha : a; - } - vec4 ${tn}_(vec4 v) { - return vec4(${tn}_(v.x), ${tn}_(v.y), ${tn}_(v.z), ${tn}_(v.w)); - } - `,name:tn,type:ts.FunctionType.ValueBased}}function tw(){return t$("log")}function tT(){let tr="neg";return{body:` - float ${tr}_(float a) { - return -a; - } - vec4 ${tr}_(vec4 v) { - return -v; - } - `,name:tr,type:ts.FunctionType.ValueBased}}function tS(){let tr="not";return{body:` - float ${tr}_(float a) { - return float( ! 
bool(a) ); - } - bool ${tr}_(bool a) { - return !a; - } - vec4 ${tr}_(vec4 v) { - return vec4(!bool(v.x), !bool(v.y), !bool(v.z), !bool(v.w)); - } - bvec4 ${tr}_(bvec4 v) { - return bvec4(!v.x, !v.y, !v.z, !v.w); - } - `,name:tr,type:ts.FunctionType.ValueBased}}function tO(){return t$("sin")}function tA(){let tr="relu";return{body:` - float ${tr}_(float a) { - return max( a, 0.0 ); - } - vec4 ${tr}_(vec4 v) { - return max( v, 0.0 ); - } - `,name:tr,type:ts.FunctionType.ValueBased}}function tE(){let tr="sigmoid";return{body:` - float ${tr}_(float a) { - return 1.0 / (1.0 + exp(-a)); - } - vec4 ${tr}_(vec4 v) { - return 1.0 / (1.0 + exp(-v)); - } - `,name:tr,type:ts.FunctionType.ValueBased}}function tI(){return t$("sqrt")}function tP(){return t$("tan")}function tD(){let tr="tanh";return{body:` - float ${tr}_(float a) { - a = clamp(a, -10., 10.); - a = exp(2.*a); - return (a - 1.) / (a + 1.); - } - vec4 ${tr}_(vec4 v) { - v = clamp(v, -10., 10.); - v = exp(2.*v); - return (v - 1.) / (v + 1.); - } - `,name:tr,type:ts.FunctionType.ValueBased}}function t$(tr){return{body:` - float ${tr}_(float a) { - return ${tr}(a); - } - vec4 ${tr}_(vec4 v) { - return ${tr}(v); - } - `,name:tr,type:ts.FunctionType.ValueBased}}tn.glslAbs=tc,tn.glslAcos=tp,tn.glslAsin=tf,tn.glslAtan=td,tn.glslCeil=th,tn.glslCos=tg,tn.glslElu=tb,tn.glslExp=tm,tn.glslFloor=ty,tn.glslClip=t_,tn.glslIdentity=tv,tn.glslLeakyRelu=tx,tn.glslLog=tw,tn.glslNeg=tT,tn.glslNot=tS,tn.glslSin=tO,tn.glslRelu=tA,tn.glslSigmoid=tE,tn.glslSqrt=tI,tn.glslTan=tP,tn.glslTanh=tD;let tk=(tr,tn,ti,to)=>{let ta=tr.session.pack?tl.TextureType.packed:tl.TextureType.unpacked,ts={name:ti.name,inputTypes:[ta],inputNames:["A"],cacheHint:to};return Object.assign(Object.assign({},ts),{get:()=>((tr,tn,ti,to)=>{let ta=tr.session.pack?tl.TextureType.packed:tl.TextureType.unpacked,ts=(0,tu.getGlsl)(tr.session.backend.glContext.version);return Object.assign(Object.assign({},tn),{output:{dims:ti.dims,type:ti.type,textureType:ta},shaderSource:` - ${to.body} - void main() { - vec4 v = ${ts.texture2D}(A, TexCoords); - v = ${to.name}_(v); - ${ts.output} = v; - } - `,hasMain:!0})})(tr,ts,tn,ti)})};tn.abs=(tr,tn)=>[tr.run(tk(tr,tn[0],tc()),tn)],tn.acos=(tr,tn)=>[tr.run(tk(tr,tn[0],tp()),tn)],tn.asin=(tr,tn)=>[tr.run(tk(tr,tn[0],tf()),tn)],tn.atan=(tr,tn)=>[tr.run(tk(tr,tn[0],td()),tn)],tn.clip=(tr,tn,ti)=>[tr.run(tk(tr,tn[0],t_(ti.min,ti.max),ti.cacheKey),tn)],tn.parseClipAttributes=tr=>(0,to.createAttributeWithCacheKey)({min:tr.attributes.getFloat("min",ta.MIN_CLIP),max:tr.attributes.getFloat("max",ta.MAX_CLIP)}),tn.clipV11=(tr,ti)=>{let to=tC(tr,ti);return(0,tn.clip)(tr,[ti[0]],to)};let tC=(tr,tn)=>{if(tn.length>=3&&(!tr.session.isInitializer(tn[1].dataId)||!tr.session.isInitializer(tn[2].dataId)))throw Error("dynamic clip attributes are not allowed");let 
ti=tn.length>=3?tn[1].numberData[0]:ta.MIN_CLIP,ts=tn.length>=3?tn[2].numberData[0]:ta.MAX_CLIP;return(0,to.createAttributeWithCacheKey)({min:ti,max:ts})};tn.ceil=(tr,tn)=>[tr.run(tk(tr,tn[0],th()),tn)],tn.cos=(tr,tn)=>[tr.run(tk(tr,tn[0],tg()),tn)],tn.elu=(tr,tn,ti)=>[tr.run(tk(tr,tn[0],tb(ti.alpha),ti.cacheKey),tn)],tn.parseEluAttributes=tr=>(0,to.createAttributeWithCacheKey)({alpha:tr.attributes.getFloat("alpha",1)}),tn.exp=(tr,tn)=>[tr.run(tk(tr,tn[0],tm()),tn)],tn.floor=(tr,tn)=>[tr.run(tk(tr,tn[0],ty()),tn)],tn.identity=(tr,tn)=>[tr.run(tk(tr,tn[0],tv()),tn)],tn.leakyRelu=(tr,tn,ti)=>[tr.run(tk(tr,tn[0],tx(ti.alpha),ti.cacheKey),tn)],tn.parseLeakyReluAttributes=tr=>(0,to.createAttributeWithCacheKey)({alpha:tr.attributes.getFloat("alpha",.01)}),tn.log=(tr,tn)=>[tr.run(tk(tr,tn[0],tw()),tn)],tn.neg=(tr,tn)=>[tr.run(tk(tr,tn[0],tT()),tn)],tn.not=(tr,tn)=>[tr.run(tk(tr,tn[0],tS()),tn)],tn.relu=(tr,tn)=>[tr.run(tk(tr,tn[0],tA()),tn)],tn.sigmoid=(tr,tn)=>[tr.run(tk(tr,tn[0],tE()),tn)],tn.sin=(tr,tn)=>[tr.run(tk(tr,tn[0],tO()),tn)],tn.sqrt=(tr,tn)=>[tr.run(tk(tr,tn[0],tI()),tn)],tn.tan=(tr,tn)=>[tr.run(tk(tr,tn[0],tP()),tn)],tn.tanh=(tr,tn)=>[tr.run(tk(tr,tn[0],tD()),tn)]},5611:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.createUnpackProgramInfoLoader=tn.createUnpackProgramInfo=void 0;let to=ti(5060),ta=ti(2039),ts=ti(9390),tu=ti(2827),tl={name:"unpack",inputNames:["A"],inputTypes:[ta.TextureType.packed]};tn.createUnpackProgramInfo=(tr,tn)=>{let ti=tn.dims.length,tc=(0,tu.getChannels)("rc",ti),tp=tc.slice(-2),tf=(0,ts.getCoordsDataType)(ti),td=(0,tu.unpackFromChannel)(),th=0===tn.dims.length?"":function(tr,tn){if(1===tr)return"rc";let ti="";for(let to=0;toObject.assign(Object.assign({},tl),{get:()=>(0,tn.createUnpackProgramInfo)(tr,ti)})},8428:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.parseUnsqueezeAttributes=tn.unsqueezeV13=tn.unsqueeze=void 0;let to=ti(2517);tn.unsqueeze=(tr,tn,ti)=>{ta(tn);let ts=to.ShapeUtil.unsqueezeShape(tn[0].dims,ti);return[tr.reshapeUnpacked(tn[0],ts)]},tn.unsqueezeV13=(tr,ti)=>(ts(ti),(0,tn.unsqueeze)(tr,[ti[0]],Array.from(ti[1].integerData))),tn.parseUnsqueezeAttributes=tr=>tr.attributes.getInts("axes");let ta=tr=>{if(!tr||1!==tr.length)throw Error("Unsqueeze requires 1 input.");if("string"===tr[0].type)throw Error("invalid input tensor types.")},ts=tr=>{if(!tr||2!==tr.length)throw Error("Unsqueeze requires 2 inputs.");if("int32"!==tr[1].type)throw Error("Invalid input type.")}},9793:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.scalesValidation=tn.validateInputs=tn.parseUpsampleAttributes=tn.parseUpsampleAttributesV9=tn.parseUpsampleAttributesV7=tn.upsample=void 0;let to=ti(246),ta=ti(5060),ts=ti(2039),tu={name:"Upsample",inputNames:["X"],inputTypes:[ts.TextureType.unpacked]};tn.upsample=(tr,ti,to)=>((0,tn.validateInputs)(ti,to),[tr.run(Object.assign(Object.assign({},tu),{cacheHint:to.cacheKey,get:()=>tl(tr,ti,to)}),ti)]),tn.parseUpsampleAttributesV7=tr=>(0,tn.parseUpsampleAttributes)(tr,7),tn.parseUpsampleAttributesV9=tr=>(0,tn.parseUpsampleAttributes)(tr,9),tn.parseUpsampleAttributes=(tr,ti)=>{let ta=ti>=10,ts=tr.attributes.getString("mode","nearest");if("nearest"!==ts&&"linear"!==ts&&(ti<11||"cubic"!==ts))throw Error(`unrecognized mode: ${ts}`);let tu=[];ti<9&&(tu=tr.attributes.getFloats("scales"),(0,tn.scalesValidation)(tu,ts,ta));let 
tl=tr.attributes.getFloat("extrapolation_value",0),tc=ti>10?tr.attributes.getString("coordinate_transformation_mode","half_pixel"):"asymmetric";if(-1===["asymmetric","pytorch_half_pixel","tf_half_pixel_for_nn","align_corners","tf_crop_and_resize","half_pixel"].indexOf(tc))throw Error(`coordinate_transform_mode '${tc}' is not supported`);let tp="tf_crop_and_resize"===tc,tf=tp,td="nearest"===ts&&ti>=11?tr.attributes.getString("nearest_mode","round_prefer_floor"):"";if(-1===["round_prefer_floor","round_prefer_ceil","floor","ceil",""].indexOf(td))throw Error(`nearest_mode '${td}' is not supported`);let th=tr.attributes.getFloat("cubic_coeff_a",-.75),tg=0!==tr.attributes.getInt("exclude_outside",0);if(tg&&"cubic"!==ts)throw Error("exclude_outside can be set to 1 only when mode is CUBIC.");let tb=ti<11||"nearest"===ts&&"asymmetric"===tc&&"floor"===td,tm=0,ty=0,t_=0;return ti>10?tr.inputs.length>2?(tm=1,ty=2,t_=3):(ty=1,t_=2):9===ti&&(ty=1),(0,to.createAttributeWithCacheKey)({opset:ti,isResize:ta,mode:ts,scales:tu,extrapolationValue:tl,coordinateTransformMode:tc,useExtrapolation:tf,needRoiInput:tp,nearestMode:td,cubicCoefficientA:th,excludeOutside:tg,useNearest2xOptimization:tb,roiInputIdx:tm,scalesInputIdx:ty,sizesInputIdx:t_})};let tl=(tr,tn,ti)=>{let to=(0,ta.getGlsl)(tr.session.backend.glContext.version),[tl,tc]=tr.calculateTextureWidthAndHeight(tn[0].dims,ts.TextureType.unpacked),tp=tn[0].dims.map((tr,tn)=>Math.floor(tr*ti.scales[tn])),[tf,td]=tr.calculateTextureWidthAndHeight(tp,ts.TextureType.unpacked),th=tp.length,tg=Array(th),tb=Array(th),tm=` - int output_pitches[${th}]; - int input_pitches[${th}]; - `;for(let tr=th-1;tr>=0;tr--)tg[tr]=tr===th-1?1:tg[tr+1]*tp[tr+1],tb[tr]=tr===th-1?1:tb[tr+1]*tn[0].dims[tr+1],tm+=` - output_pitches[${tr}] = ${tg[tr]}; - input_pitches[${tr}] = ${tb[tr]}; - `;let ty=` - float getInputFloat(int index) { - vec2 coords = offsetToCoords(index, ${tl}, ${tc}); - float value = getColorAsFloat(${to.texture2D}(X, coords)); - return value; - } - `,t_="nearest"===ti.mode?` - ${ty} - float process(int indices[${th}]) { - int input_index = 0; - int output_index = coordsToOffset(TexCoords, ${tf}, ${td}); - - ${tm} - - int d, m; - for (int dim = 0; dim < ${th}; ++dim) { - d = output_index / output_pitches[dim]; - m = output_index - d * output_pitches[dim]; - output_index = m; - - if (scales[dim] != 1 && d > 0) { - int d2 = d / scales[dim]; - m = d - d2 * scales[dim]; - d = d2; - } - input_index += input_pitches[dim] * d; - } - - return getInputFloat(input_index); - }`:4===th?` - ${ty} - float process(int indices[4]) { - int input_index = 0; - int output_index = coordsToOffset(TexCoords, ${tf}, ${td}); - - ${tm} - - int m; - int index_of_dim0, index_of_dim1, index_of_dim2, index_of_dim3; - index_of_dim0 = output_index / output_pitches[0]; - m = output_index - index_of_dim0 * output_pitches[0]; - index_of_dim1 = m / output_pitches[1]; - m = m - index_of_dim1 * output_pitches[1]; - index_of_dim2 = m / output_pitches[2]; - m = m - index_of_dim2 * output_pitches[2]; - index_of_dim3 = m; - - int index_of_input_dim2, index_of_input_dim3, x_offset, y_offset; - index_of_input_dim2 = index_of_dim2 / scales[2]; - y_offset = index_of_dim2 - index_of_input_dim2 * scales[2]; - index_of_input_dim3 = index_of_dim3 / scales[3]; - x_offset = index_of_dim3 - index_of_input_dim3 * scales[3]; - - input_index = index_of_dim0 * input_pitches[0] + - index_of_dim1 * input_pitches[1] + - index_of_input_dim2 * input_pitches[2] + - index_of_input_dim3; - - float x00 = getInputFloat(input_index); - 
float x10, x01, x11; - - bool end_of_dim2 = false; - if (index_of_input_dim2 == (${tn[0].dims[2]} - 1)) { - // It's the end in dimension 2 - x01 = x00; - end_of_dim2 = true; - } else { - x01 = getInputFloat(input_index + input_pitches[2]); - } - - if (index_of_input_dim3 == (input_pitches[2] - 1)) { - // It's the end in dimension 3 - x10 = x00; - x11 = x01; - } - else { - x10 = getInputFloat(input_index + 1); - x11 = end_of_dim2 ? x10 : getInputFloat(input_index + input_pitches[2] + 1); - } - - float y0 = x00 + float(y_offset) * (x01 - x00) / float(scales[2]); - float y1 = x10 + float(y_offset) * (x11 - x10) / float(scales[2]); - return y0 + float(x_offset) * (y1 - y0) / float(scales[3]); - }`:` - ${ty} - float process(int indices[2]) { - int input_index = 0; - int output_index = coordsToOffset(TexCoords, ${tf}, ${td}); - - ${tm} - - int m; - int index_of_dim0, index_of_dim1; - index_of_dim0 = output_index / output_pitches[0]; - m = output_index - index_of_dim0 * output_pitches[0]; - index_of_dim1 = m; - - int index_of_input_dim0, index_of_input_dim1, x_offset, y_offset; - index_of_input_dim0 = index_of_dim0 / scales[0]; - y_offset = index_of_dim0 - index_of_input_dim0 * scales[0]; - index_of_input_dim1 = index_of_dim1 / scales[1]; - x_offset = index_of_dim1 - index_of_input_dim1 * scales[1]; - - input_index = index_of_input_dim0 * input_pitches[0] + index_of_input_dim1; - - float x00 = getInputFloat(input_index); - float x10, x01, x11; - - bool end_of_dim0 = false; - if (index_of_input_dim0 == (${tn[0].dims[0]} - 1)) { - // It's the end in dimension 0 - x01 = x00; - end_of_dim0 = true; - } else { - x01 = getInputFloat(input_index + input_pitches[0]); - } - - if (index_of_input_dim1 == (input_pitches[0] - 1)) { - // It's the end in dimension 1 - x10 = x00; - x11 = x01; - } - else { - x10 = getInputFloat(input_index + 1); - x11 = end_of_dim0 ? 
x10 : getInputFloat(input_index + input_pitches[0] + 1); - } - - float y0 = x00 + float(y_offset) * (x01 - x00) / float(scales[0]); - float y1 = x10 + float(y_offset) * (x11 - x10) / float(scales[0]); - return y0 + float(x_offset) * (y1 - y0) / float(scales[1]); - }`;return Object.assign(Object.assign({},tu),{output:{dims:tp,type:tn[0].type,textureType:ts.TextureType.unpacked},shaderSource:t_,variables:[{name:"scales",type:"int",arrayLength:ti.scales.length,data:ti.scales.map(tr=>Math.ceil(tr))}]})};tn.validateInputs=(tr,tn)=>{if(!tr||tn.opset<9&&1!==tr.length||tn.opset>=9&&tn.opset<11&&2!==tr.length||tn.opset>=11&&tr.length<2)throw Error("invalid inputs.");if(tn.scales.length>0&&tr[0].dims.length!==tn.scales.length)throw Error("Invalid input shape.");if("string"===tr[0].type)throw Error("Invalid input tensor types.")},tn.scalesValidation=(tr,tn,ti)=>{if(ti){for(let tn of tr)if(tn<=0)throw Error("Scale value should be greater than 0.")}else for(let tn of tr)if(tn<1)throw Error("Scale value should be greater than or equal to 1.");if(!("linear"!==tn&&"cubic"!==tn||2===tr.length||4===tr.length&&1===tr[0]&&1===tr[1]))throw Error(`'Linear' mode and 'Cubic' mode only support 2-D inputs ('Bilinear', 'Bicubic') or 4-D inputs with the corresponding outermost 2 scale values being 1 in the ${ti?"Resize":"Upsample"} opeartor.`)}},1958:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.ProgramManager=void 0;let to=ti(1670),ta=ti(6231),ts=ti(8879),tu=ti(5060);tn.ProgramManager=class{constructor(tr,tn,ti){this.profiler=tr,this.glContext=tn,this.textureLayoutStrategy=ti,this.repo=new Map,this.attributesBound=!1}getArtifact(tr){return this.repo.get(tr)}setArtifact(tr,tn){this.repo.set(tr,tn)}run(tr,tn,ti){var to;this.profiler.event("op",`ProgramManager.run ${null!==(to=tr.programInfo.name)&&void 0!==to?to:"unknown kernel"}`,()=>{var to;let ts=this.glContext.gl,tu=tr.program;ts.useProgram(tu);try{this.bindOutput(ti),this.attributesBound||this.bindAttributes(tr.attribLocations),this.bindUniforms(tr.uniformLocations,null!==(to=tr.programInfo.variables)&&void 0!==to?to:[],tn)}catch(tn){throw ta.Logger.error("ProgramManager",tr.programInfo.shaderSource),tn}this.profiler.event("backend","GlContext.draw()",()=>{this.glContext.draw()})},this.glContext)}dispose(){this.vertexShader&&this.glContext.deleteShader(this.vertexShader),this.repo.forEach(tr=>this.glContext.deleteProgram(tr.program))}build(tr,tn,ti){return this.profiler.event("backend","ProgramManager.build",()=>{let to=new ts.GlslPreprocessor(this.glContext,tr,tn,ti),ta=to.preprocess(),tu=this.compile(ta);return{programInfo:tr,program:tu,uniformLocations:this.getUniformLocations(tu,to.context.programInfo.inputNames,to.context.programInfo.variables),attribLocations:this.getAttribLocations(tu)}})}compile(tr){if(!this.vertexShader){ta.Logger.verbose("ProrgramManager","Compiling and caching Vertex shader for the first time");let tr=(0,tu.getVertexShaderSource)(this.glContext.version);this.vertexShader=this.glContext.compileShader(tr,this.glContext.gl.VERTEX_SHADER)}to.env.debug&&ta.Logger.verbose("ProrgramManager",`FragShader: -${tr} -`);let tn=this.glContext.compileShader(tr,this.glContext.gl.FRAGMENT_SHADER),ti=this.glContext.createProgram(this.vertexShader,tn);return this.glContext.deleteShader(tn),ti}bindOutput(tr){let tn=tr.width,ti=tr.height;ta.Logger.verbose("ProrgramManager",`Binding output texture to Framebuffer: w/h=${tn}/${ti}, shape=${tr.shape}, 
type=${tr.tensor.type}`),this.glContext.attachFramebuffer(tr.texture,tn,ti)}bindAttributes(tr){let tn=tr.position,ti=tr.textureCoord;this.glContext.setVertexAttributes(tn,ti),this.attributesBound=!0}bindUniforms(tr,tn,ti){var to;let ta=this.glContext.gl,ts=0;for(let{name:tu,type:tl,location:tc,arrayLength:tp}of tr){let tr=null===(to=tn.find(tr=>tr.name===tu))||void 0===to?void 0:to.data;if("sampler2D"!==tl&&!tr)throw Error(`variable '${tu}' does not have data defined in program info`);switch(tl){case"sampler2D":this.bindTexture(ti[ts],tc,ts),ts++;break;case"float":tp?ta.uniform1fv(tc,tr):ta.uniform1f(tc,tr);break;case"int":tp?ta.uniform1iv(tc,tr):ta.uniform1i(tc,tr);break;default:throw Error(`Uniform not implemented: ${tl}`)}}}bindTexture(tr,tn,ti){this.glContext.bindTextureToUniform(tr.texture,ti,tn)}getAttribLocations(tr){return{position:this.getAttribLocation(tr,"position"),textureCoord:this.getAttribLocation(tr,"textureCoord")}}getUniformLocations(tr,tn,ti){let to=[];if(tn)for(let ti of tn)to.push({name:ti,type:"sampler2D",location:this.getUniformLocation(tr,ti)});if(ti)for(let tn of ti)to.push(Object.assign(Object.assign({},tn),{location:this.getUniformLocation(tr,tn.name)}));return to}getUniformLocation(tr,tn){let ti=this.glContext.gl.getUniformLocation(tr,tn);if(null===ti)throw Error(`Uniform ${tn} not found.`);return ti}getAttribLocation(tr,tn){return this.glContext.gl.getAttribLocation(tr,tn)}}},6416:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.WebGLSessionHandler=void 0;let to=ti(6231),ta=ti(1047),ts=ti(8316),tu=ti(1640),tl=ti(1958),tc=ti(7859),tp=ti(5702);tn.WebGLSessionHandler=class{constructor(tr,tn){this.backend=tr,this.context=tn,this.layoutStrategy=new tc.PreferLogicalStrategy(tr.glContext.maxTextureSize),this.programManager=new tl.ProgramManager(this.context.profiler,tr.glContext,this.layoutStrategy),this.textureManager=new tp.TextureManager(tr.glContext,this.layoutStrategy,this.context.profiler,{reuseTextures:"full"===tr.textureCacheMode}),this.packedTextureDataCache=new Map,this.unpackedTextureDataCache=new Map,this.pack=tr.pack,this.pack2unpackMap=new Map,this.unpack2packMap=new Map}createInferenceHandler(){return new ts.WebGLInferenceHandler(this)}onGraphInitialized(tr){let tn=tr.getValues().filter(tr=>-1===tr.from&&tr.tensor).map(tr=>tr.tensor.dataId);this.initializers=new Set(tn)}isInitializer(tr){return!!this.initializers&&this.initializers.has(tr)}addInitializer(tr){this.initializers.add(tr)}getTextureData(tr,tn){return tn?this.packedTextureDataCache.get(tr):this.unpackedTextureDataCache.get(tr)}setTextureData(tr,tn,ti=!1){to.Logger.verbose("WebGLSessionHandler","Storing Texture data in cache"),ti?this.packedTextureDataCache.set(tr,tn):this.unpackedTextureDataCache.set(tr,tn)}dispose(){this.programManager.dispose(),this.textureManager.clearActiveTextures(),this.packedTextureDataCache.forEach(tr=>this.textureManager.releaseTexture(tr,!0)),this.packedTextureDataCache=new Map,this.unpackedTextureDataCache.forEach(tr=>this.textureManager.releaseTexture(tr,!0)),this.unpackedTextureDataCache=new Map}resolve(tr,tn,ti){let to=(0,ta.resolveOperator)(tr,tn,tu.WEBGL_OP_RESOLVE_RULES);return{impl:to.opImpl,context:to.opInit?to.opInit(tr,ti):tr}}}},7769:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.Uint8DataEncoder=tn.RGBAFloatDataEncoder=tn.RedFloat32DataEncoder=void 0;let 
to=ti(6231);tn.RedFloat32DataEncoder=class{constructor(tr,tn=1){if(1===tn)this.internalFormat=tr.R32F,this.format=tr.RED,this.textureType=tr.FLOAT,this.channelSize=tn;else{if(4!==tn)throw Error(`Invalid number of channels: ${tn}`);this.internalFormat=tr.RGBA32F,this.format=tr.RGBA,this.textureType=tr.FLOAT,this.channelSize=tn}}encode(tr,tn){let ti,ta;return tr.constructor!==Float32Array&&(to.Logger.warning("Encoder","data was not of type Float32; creating new Float32Array"),ta=new Float32Array(tr)),tn*this.channelSize>tr.length?(to.Logger.warning("Encoder","Source data too small. Allocating larger array"),ta=tr,ti=this.allocate(tn*this.channelSize),ta.forEach((tr,tn)=>ti[tn]=tr)):ti=ta=tr,ti}allocate(tr){return new Float32Array(4*tr)}decode(tr,tn){return 1===this.channelSize?tr.filter((tr,tn)=>tn%4==0).subarray(0,tn):tr.subarray(0,tn)}},tn.RGBAFloatDataEncoder=class{constructor(tr,tn=1,ti){if(1!==tn&&4!==tn)throw Error(`Invalid number of channels: ${tn}`);this.internalFormat=tr.RGBA,this.format=tr.RGBA,this.channelSize=tn,this.textureType=ti||tr.FLOAT}encode(tr,tn){let ti=tr;return 1===this.channelSize&&(to.Logger.verbose("Encoder","Exploding into a larger array"),ti=this.allocate(tn),tr.forEach((tr,tn)=>ti[4*tn]=tr)),ti}allocate(tr){return new Float32Array(4*tr)}decode(tr,tn){return 1===this.channelSize?tr.filter((tr,tn)=>tn%4==0).subarray(0,tn):tr.subarray(0,tn)}},tn.Uint8DataEncoder=class{constructor(tr,tn=1){if(this.channelSize=4,1===tn)this.internalFormat=tr.ALPHA,this.format=tr.ALPHA,this.textureType=tr.UNSIGNED_BYTE,this.channelSize=tn;else{if(4!==tn)throw Error(`Invalid number of channels: ${tn}`);this.internalFormat=tr.RGBA,this.format=tr.RGBA,this.textureType=tr.UNSIGNED_BYTE,this.channelSize=tn}}encode(tr,tn){return new Uint8Array(tr.buffer,tr.byteOffset,tr.byteLength)}allocate(tr){return new Uint8Array(tr*this.channelSize)}decode(tr,tn){if(tr instanceof Uint8Array)return tr.subarray(0,tn);throw Error(`Invalid array type: ${tr.constructor}`)}}},7859:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.getBatchDim=tn.sizeToSquarishShape=tn.getRowsCols=tn.sizeFromShape=tn.isInt=tn.parseAxisParam=tn.squeezeShape=tn.PreferLogicalStrategy=tn.AlwaysKeepOriginalSizeStrategy=void 0;let to=ti(6231),ta=ti(2517);function ts(tr,tn){let ti=[],to=[],ta=null!=tn&&Array.isArray(tn)&&0===tn.length,ts=null==tn||ta?null:tu(tn,tr).sort(),tl=0;for(let tn=0;tntn)&&1===tr[tn]&&(ti.push(tr[tn]),to.push(tn)),ts[tl]<=tn&&tl++}1!==tr[tn]&&(ti.push(tr[tn]),to.push(tn))}return{newShape:ti,keptDims:to}}function tu(tr,tn){let ti=tn.length;return tr=null==tr?tn.map((tr,tn)=>tn):[].concat(tr),(0,ta.assert)(tr.every(tr=>tr>=-ti&&tr`All values in axis param must be in range [-${ti}, ${ti}) but got axis ${tr}`),(0,ta.assert)(tr.every(tl),()=>`All values in axis param must be integers but got axis ${tr}`),tr.map(tr=>tr<0?ti+tr:tr)}function tl(tr){return tr%1==0}function tc(tr){if(0===tr.length)return 1;let tn=tr[0];for(let ti=1;ti=tr.length?1:tr.slice(tn.breakAxis).reduce((tr,tn)=>tr*tn),ts=tn.breakAxis<=0?1:tr.slice(0,tn.breakAxis).reduce((tr,tn)=>tr*tn);if(!(ta>ti||ts>ti))return[ta,ts];to.Logger.verbose("TextureLayout",`Given width/height preferences were unattainable: shape:${tr}, breakAxis:${tn.breakAxis}`)}let ta=tr.reduce((tr,tn)=>tr*tn),ts=Math.floor(Math.sqrt(ta));for(;ts=ti||ta%ts!=0)throw Error(`The given dimensions are outside this GPU's boundaries: ${tr}`);return[ts,ta/ts]}},tn.PreferLogicalStrategy=class{constructor(tr){this.maxTextureSize=tr}computeTextureWH(tr,tn){let 
ti=this.computeTexture(tr,tn);return tn&&tn.isPacked&&(ti[0]/=2,ti[1]/=2),tn&&tn.reverseWH?[ti[1],ti[0]]:ti}computeTexture(tr,tn){let ti=tn&&tn.isPacked;if(0===tr.length)return ti?[2,2]:[1,1];let ta=this.maxTextureSize;if(tn&&void 0!==tn.breakAxis){let ti=tn.breakAxis>=tr.length?1:tr.slice(tn.breakAxis).reduce((tr,tn)=>tr*tn),ts=tn.breakAxis<=0?1:tr.slice(0,tn.breakAxis).reduce((tr,tn)=>tr*tn);if(!(ti>ta||ts>ta))return[ti,ts];to.Logger.verbose("TextureLayout",`Given width/height preferences were unattainable: shape:${tr}, breakAxis:${tn.breakAxis}`)}let tu=tr.slice(0);if(ti&&(ta*=2,1===(tu=tu.map((tr,tn)=>tn>=tu.length-2?tu[tn]%2==0?tu[tn]:tu[tn]+1:tu[tn])).length&&(tu=[2,tu[0]])),2!==tu.length){let tr=ts(tu);tu=tr.newShape}let tl=tc(tu);return tu.length<=1&&tl<=ta?[1,tl]:2===tu.length&&tu[0]<=ta&&tu[1]<=ta?tu:3===tu.length&&tu[0]*tu[1]<=ta&&tu[2]<=ta?[tu[0]*tu[1],tu[2]]:3===tu.length&&tu[0]<=ta&&tu[1]*tu[2]<=ta?[tu[0],tu[1]*tu[2]]:4===tu.length&&tu[0]*tu[1]*tu[2]<=ta&&tu[3]<=ta?[tu[0]*tu[1]*tu[2],tu[3]]:4===tu.length&&tu[0]<=ta&&tu[1]*tu[2]*tu[3]<=ta?[tu[0],tu[1]*tu[2]*tu[3]]:ti?tp(tl/4).map(tr=>2*tr):tp(tl)}},tn.squeezeShape=ts,tn.parseAxisParam=tu,tn.isInt=tl,tn.sizeFromShape=tc,tn.getRowsCols=function(tr){if(0===tr.length)throw Error("Cannot get rows and columns of an empty shape array.");return[tr.length>1?tr[tr.length-2]:1,tr[tr.length-1]]},tn.sizeToSquarishShape=tp,tn.getBatchDim=function(tr,tn=2){return tc(tr.slice(0,tr.length-tn))}},4057:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.createTextureLayoutFromShape=tn.calculateTextureWidthAndHeight=tn.createTextureLayoutFromTextureType=void 0;let to=ti(2517),ta=ti(2039);tn.createTextureLayoutFromTextureType=(tr,ti,to)=>{let ts=to===ta.TextureType.unpacked||to===ta.TextureType.unpackedReversed?1:4,tu=to===ta.TextureType.packed,tl=to===ta.TextureType.unpackedReversed||to===ta.TextureType.packed,tc=to===ta.TextureType.packedLastDimension?ti.length-1:void 0,tp=to===ta.TextureType.packedLastDimension?ti.map((tr,tn)=>tn===ti.length-1?4*tr:tr):void 0;return(0,tn.createTextureLayoutFromShape)(tr,ti,ts,tp,{isPacked:tu,reverseWH:tl,breakAxis:tc})},tn.calculateTextureWidthAndHeight=(tr,ti,to)=>{let ta=(0,tn.createTextureLayoutFromTextureType)(tr,ti,to);return[ta.width,ta.height]},tn.createTextureLayoutFromShape=(tr,tn,ti=1,ta,ts)=>{let tu=!(!ts||!ts.isPacked),[tl,tc]=tr.computeTextureWH(tu&&ta||tn,ts),tp=tn.length,tf=tn.slice(0);if(0===tp&&(tf=[1]),1===ti)ta=tn;else if(tu){if(4!==ti)throw Error("a packed texture must be 4-channel");ta=tn,tp>0&&(tf[tp-1]=Math.ceil(tf[tp-1]/2)),tp>1&&(tf[tp-2]=Math.ceil(tf[tp-2]/2))}else if(!ta)throw Error("Unpacked shape is needed when using channels > 1");return{width:tl,height:tc,channels:ti,isPacked:tu,shape:tf,strides:to.ShapeUtil.computeStrides(tf),unpackedShape:ta,reversedWH:ts&&ts.reverseWH}}},5702:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.TextureManager=void 0;let to=ti(6231);tn.TextureManager=class{constructor(tr,tn,ti,to){this.glContext=tr,this.layoutStrategy=tn,this.profiler=ti,this.config=to,this.pendingRead=new Map,to.reuseTextures&&(this.inUseTextures=new Map,this.idleTextures=new Map,this.textureLookup=new Map)}createTextureFromLayout(tr,tn,ti,ta){let ts,tu;let tl=this.toEncoderType(tr),tc=this.glContext.getEncoder(tl,tn.channels||1,ta);if(tn.isPacked&&1===ta)throw Error("not implemented");let 
tp=tn.width,tf=tn.height;if(this.config.reuseTextures){ts=`${tp}x${tf}_${tc.format}_${tc.internalFormat}_${tc.textureType}`,(tu=this.inUseTextures.get(ts))||(tu=[],this.inUseTextures.set(ts,tu));let tn=this.idleTextures.get(ts);if(tn&&tn.length>0){let to=tn.pop();return tu.push(to),1===ta&&this.glContext.updateTexture(to,tp,tf,tc,this.toTextureData(tr,ti)),to}}to.Logger.verbose("TextureManager",`Creating new texture of size ${tn.width}x${tn.height}`);let td=this.glContext.allocateTexture(tp,tf,tc,this.toTextureData(tr,ti));return this.config.reuseTextures&&(tu.push(td),this.textureLookup.set(td,ts)),td}readTexture(tr,tn,ti){return ti||(ti=1),this.profiler.event("backend","TextureManager.readTexture",()=>{let to=tr.shape.reduce((tr,tn)=>tr*tn)*ti,ta=this.glContext.readTexture(tr.texture,tr.width,tr.height,to,this.toEncoderType(tn),ti);return this.toTensorData(tn,ta)})}async readTextureAsync(tr,tn,ti){let to=tr.tensor.dataId;if(ti||(ti=1),this.pendingRead.has(to)){let tr=this.pendingRead.get(to);return new Promise(tn=>null==tr?void 0:tr.push(tn))}return this.profiler.event("backend","TextureManager.readTextureAsync",async()=>{this.pendingRead.set(to,[]);let ta=tr.shape.reduce((tr,tn)=>tr*tn)*ti;await this.glContext.createAndWaitForFence();let ts=this.glContext.readTexture(tr.texture,tr.width,tr.height,ta,this.toEncoderType(tn),ti),tu=this.toTensorData(tn,ts),tl=this.pendingRead.get(to);return this.pendingRead.delete(to),null==tl||tl.forEach(tr=>tr(tu)),tu})}readUint8TextureAsFloat(tr){return this.profiler.event("backend","TextureManager.readUint8TextureAsFloat",()=>{let tn=tr.shape.reduce((tr,tn)=>tr*tn),ti=this.glContext.readTexture(tr.texture,tr.width,tr.height,4*tn,"byte",4);return new Float32Array(ti.buffer,ti.byteOffset,tn)})}releaseTexture(tr,tn){let ti;if(this.config.reuseTextures&&(ti=this.textureLookup.get(tr.texture))){tn&&this.textureLookup.delete(ti);let to=this.inUseTextures.get(ti);if(to){let tn=to.indexOf(tr.texture);if(-1!==tn){to.splice(tn,1);let ta=this.idleTextures.get(ti);ta||(ta=[],this.idleTextures.set(ti,ta)),ta.push(tr.texture)}}}ti&&!tn||(to.Logger.verbose("TextureManager",`Deleting texture of size ${tr.width}x${tr.height}`),this.glContext.deleteTexture(tr.texture))}toTensorData(tr,tn){switch(tr){case"int16":return tn instanceof Int16Array?tn:Int16Array.from(tn);case"int32":return tn instanceof Int32Array?tn:Int32Array.from(tn);case"int8":return tn instanceof Int8Array?tn:Int8Array.from(tn);case"uint16":return tn instanceof Uint16Array?tn:Uint16Array.from(tn);case"uint32":return tn instanceof Uint32Array?tn:Uint32Array.from(tn);case"uint8":case"bool":return tn instanceof Uint8Array?tn:Uint8Array.from(tn);case"float32":return tn instanceof Float32Array?tn:Float32Array.from(tn);case"float64":return tn instanceof Float64Array?tn:Float64Array.from(tn);default:throw Error(`TensorData type ${tr} is not supported`)}}toTextureData(tr,tn){if(tn)return tn instanceof Float32Array?tn:new Float32Array(tn)}toEncoderType(tr){return"float"}clearActiveTextures(){this.glContext.clearActiveTextures()}}},2039:(tr,tn)=>{"use strict";var ti;Object.defineProperty(tn,"__esModule",{value:!0}),tn.TextureType=void 0,(ti=tn.TextureType||(tn.TextureType={}))[ti.unpacked=0]="unpacked",ti[ti.unpackedReversed=1]="unpackedReversed",ti[ti.packed=2]="packed",ti[ti.downloadUint8AsFloat=3]="downloadUint8AsFloat",ti[ti.packedLastDimension=4]="packedLastDimension"},9390:(tr,tn,ti)=>{"use 
strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.getGlChannels=tn.getCoordsDataType=tn.getSqueezedParams=tn.squeezeInputShape=tn.generateShaderFuncNameFromInputSamplerNameAtOutCoords=tn.generateShaderFuncNameFromInputSamplerName=tn.repeatedTry=tn.getPackedShape=void 0;let to=ti(2517);tn.getPackedShape=function(tr){let tn=tr.length;return tr.slice(0,tn-1).concat(tr[tn-1]/4)},tn.repeatedTry=async function(tr,tn=tr=>0,ti){return new Promise((to,ta)=>{let ts=0,tu=()=>{if(tr())return void to();ts++;let tl=tn(ts);null!=ti&&ts>=ti?ta():setTimeout(tu,tl)};tu()})},tn.generateShaderFuncNameFromInputSamplerName=function(tr){return(0,to.assert)(void 0!==tr&&0!==tr.length,()=>"empty string found for sampler name"),"get"+tr.charAt(0).toUpperCase()+tr.slice(1)},tn.generateShaderFuncNameFromInputSamplerNameAtOutCoords=function(tr){return(0,to.assert)(void 0!==tr&&0!==tr.length,()=>"empty string found for sampler name"),"get"+tr.charAt(0).toUpperCase()+tr.slice(1)+"AtOutCoords"},tn.squeezeInputShape=function(tr,tn){return JSON.parse(JSON.stringify(tr)),tn},tn.getSqueezedParams=function(tr,tn){return tn.map(tn=>tr[tn]).join(", ")},tn.getCoordsDataType=function(tr){if(tr<=1)return"int";if(2===tr)return"ivec2";if(3===tr)return"ivec3";if(4===tr)return"ivec4";if(5===tr)return"ivec5";if(6===tr)return"ivec6";throw Error(`GPU for rank ${tr} is not yet supported`)},tn.getGlChannels=function(tr=6){return["x","y","z","w","u","v"].slice(0,tr)}},7305:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.createNewWebGLContext=tn.createWebGLContext=void 0;let to=ti(6231),ta=ti(1713),ts={};function tu(tr){let tn;let ti=function(){if("undefined"==typeof document){if("undefined"==typeof OffscreenCanvas)throw TypeError("failed to create canvas: OffscreenCanvas is not supported");return new OffscreenCanvas(1,1)}let tr=document.createElement("canvas");return tr.width=1,tr.height=1,tr}(),ts={alpha:!1,depth:!1,antialias:!1,stencil:!1,preserveDrawingBuffer:!1,premultipliedAlpha:!1,failIfMajorPerformanceCaveat:!1};if((!tr||"webgl2"===tr)&&(tn=ti.getContext("webgl2",ts)))try{return new ta.WebGLContext(tn,2)}catch(tr){to.Logger.warning("GlContextFactory",`failed to create WebGLContext using contextId 'webgl2'. Error: ${tr}`)}if((!tr||"webgl"===tr)&&(tn=ti.getContext("webgl",ts)||ti.getContext("experimental-webgl",ts)))try{return new ta.WebGLContext(tn,1)}catch(tr){to.Logger.warning("GlContextFactory",`failed to create WebGLContext using contextId 'webgl' or 'experimental-webgl'. 
Error: ${tr}`)}throw Error("WebGL is not supported")}tn.createWebGLContext=function tr(tn){let ti;(!tn||"webgl2"===tn)&&"webgl2"in ts?ti=ts.webgl2:(!tn||"webgl"===tn)&&"webgl"in ts&&(ti=ts.webgl),ti=ti||tu(tn),tn=tn||1===ti.version?"webgl":"webgl2";let to=ti.gl;return ts[tn]=ti,to.isContextLost()?(delete ts[tn],tr(tn)):(to.disable(to.DEPTH_TEST),to.disable(to.STENCIL_TEST),to.disable(to.BLEND),to.disable(to.DITHER),to.disable(to.POLYGON_OFFSET_FILL),to.disable(to.SAMPLE_COVERAGE),to.enable(to.SCISSOR_TEST),to.enable(to.CULL_FACE),to.cullFace(to.BACK),ti)},tn.createNewWebGLContext=tu},1713:function(tr,tn,ti){"use strict";var to=this&&this.__createBinding||(Object.create?function(tr,tn,ti,to){void 0===to&&(to=ti);var ta=Object.getOwnPropertyDescriptor(tn,ti);ta&&!("get"in ta?!tn.__esModule:ta.writable||ta.configurable)||(ta={enumerable:!0,get:function(){return tn[ti]}}),Object.defineProperty(tr,to,ta)}:function(tr,tn,ti,to){void 0===to&&(to=ti),tr[to]=tn[ti]}),ta=this&&this.__setModuleDefault||(Object.create?function(tr,tn){Object.defineProperty(tr,"default",{enumerable:!0,value:tn})}:function(tr,tn){tr.default=tn}),ts=this&&this.__importStar||function(tr){if(tr&&tr.__esModule)return tr;var tn={};if(null!=tr)for(var ti in tr)"default"!==ti&&Object.prototype.hasOwnProperty.call(tr,ti)&&to(tn,tr,ti);return ta(tn,tr),tn};Object.defineProperty(tn,"__esModule",{value:!0}),tn.WebGLContext=tn.linearSearchLastTrue=void 0;let tu=ti(1670),tl=ts(ti(7769)),tc=ti(9390);function tp(tr){let tn=0;for(;tnthis.isTimerResultAvailable(tr)),this.getTimerResult(tr)}async createAndWaitForFence(){let tr=this.createFence(this.gl);return this.pollFence(tr)}createFence(tr){let tn;let ti=tr,to=ti.fenceSync(ti.SYNC_GPU_COMMANDS_COMPLETE,0);return tr.flush(),tn=null===to?()=>!0:()=>{let tr=ti.clientWaitSync(to,0,0);return tr===ti.ALREADY_SIGNALED||tr===ti.CONDITION_SATISFIED},{query:to,isFencePassed:tn}}async pollFence(tr){return new Promise(tn=>{this.addItemToPoll(()=>tr.isFencePassed(),()=>tn())})}pollItems(){let tr=tp(this.itemsToPoll.map(tr=>tr.isDoneFn));for(let tn=0;tn<=tr;++tn){let{resolveFn:tr}=this.itemsToPoll[tn];tr()}this.itemsToPoll=this.itemsToPoll.slice(tr+1)}async addItemToPoll(tr,tn){this.itemsToPoll.push({isDoneFn:tr,resolveFn:tn}),this.itemsToPoll.length>1||await (0,tc.repeatedTry)(()=>(this.pollItems(),0===this.itemsToPoll.length))}}},1036:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.ExecutionPlan=void 0;let to=ti(6231);class ta{constructor(tr,tn){this.op=tr,this.node=tn}}tn.ExecutionPlan=class{constructor(tr,tn,ti){this.graph=tr,this.profiler=ti,this.initialize(tn)}initialize(tr){this.profiler.event("session","ExecutionPlan.initialize",()=>{let tn=this.graph.getNodes();if(tn.length!==tr.length)throw Error("The size of nodes and OPs do not match.");this._ops=tr.map((tr,ti)=>new ta(tr,tn[ti])),this.reset(),this._starter=[],this._ops.forEach((tr,tn)=>{let ti=!0;for(let tn of tr.node.inputs)if(!this._values[tn]&&-1===this.graph.getInputIndices().indexOf(tn)){ti=!1;break}ti&&this._starter.push(tn)})})}reset(){this._values=this.graph.getValues().map(tr=>tr.tensor)}async execute(tr,tn){return this.profiler.event("session","ExecutionPlan.execute",async()=>{this.reset();let ti=tr.createInferenceHandler(),ta=this.graph.getInputIndices();if(tn.length!==ta.length)throw Error(`number of input tensors don't match the number of inputs to the model: actual: ${tn.length} expected: ${ta.length}`);tn.forEach((tr,tn)=>{let ti=ta[tn];this._values[ti]=tr});let 
ts=this._starter.slice(0),tu=this.graph.getValues(),tl=this.graph.getNodes(),tc=0;for(;tcthis._values[tr]);if(-1!==ta.indexOf(void 0))throw Error(`unresolved input detected: op: ${tn.node}`);let tp=ta;to.Logger.verbose("ExecPlan",`Runing op:${tn.node.name} (${tp.map((tr,ti)=>`'${tn.node.inputs[ti]}': ${tr.type}[${tr.dims.join(",")}]`).join(", ")})`);let tf=await this.profiler.event("node",tn.node.name,async()=>tn.op.impl(ti,tp,tn.op.context));if(tf.length!==tn.node.outputs.length)throw Error("the size of output does not match model definition.");tf.forEach((tr,ti)=>{let to=tn.node.outputs[ti];if(this._values[to])throw Error(`output [${to}] already has value: op:${tn.node.name}`);this._values[to]=tr});let td=new Set;tf.forEach((tr,ti)=>{let to=tn.node.outputs[ti];for(let tr of tu[to].to){let tn=tl[tr],ti=!0;for(let tr of tn.inputs)if(!this._values[tr]){ti=!1;break}ti&&td.add(tr)}}),ts.push(...td)}let tp=[];for(let tr=0;tr{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.Graph=void 0;let to=ti(1446),ta=ti(7778),ts=ti(9395),tu=ti(9162),tl=ti(2517);var tc=ts.onnxruntime.experimental.fbs;tn.Graph={from:(tr,tn)=>new td(tr,tn)};class tp{constructor(tr){this._from=void 0,this._to=[],this.tensor=void 0,this.type=void 0,tr&&(this.type=tl.ProtoUtil.tensorValueTypeFromProto(tr.type.tensorType))}get from(){return this._from}get to(){return this._to}}class tf{constructor(tr,tn){tr instanceof to.onnx.NodeProto?(this.name=tr.name,this.opType=tr.opType,this.attributes=new ta.Attribute(tr.attribute)):tr instanceof tc.Node&&(this.name=null!=tn?tn:tr.name(),this.opType=tr.opType(),this.attributes=new ta.Attribute(tl.ProtoUtil.tensorAttributesFromORTFormat(tr))),this.inputs=[],this.outputs=[],this.executeNode=!0}}class td{constructor(tr,tn){if(!tr)throw TypeError("graph is empty");this.buildGraph(tr),this.transformGraph(tn),this.checkIsAcyclic()}getInputIndices(){return this._allInputIndices}getInputNames(){return this._allInputNames}getOutputIndices(){return this._allOutputIndices}getOutputNames(){return this._allOutputNames}getValues(){return this._allData}getNodes(){return this._nodes}buildGraph(tr){if(tr instanceof to.onnx.GraphProto)this.buildGraphFromOnnxFormat(tr);else{if(!(tr instanceof tc.Graph))throw TypeError("Graph type is not supported.");this.buildGraphFromOrtFormat(tr)}}buildGraphFromOnnxFormat(tr){let tn=new Map;this._allData=[],this._allInputIndices=[],this._allInputNames=[],this._allOutputIndices=[],this._allOutputNames=[],this._nodes=[];let ti=new Map;if(!tr.input)throw Error("missing information in graph: input");let to=[];for(let ti of tr.input){if(tn.has(ti.name))throw Error(`duplicated input name: ${ti.name}`);let tr=this._allData.push(new tp(ti))-1;tn.set(ti.name,tr),to.push(ti.name)}if(!tr.initializer)throw Error("missing information in graph: initializer");for(let ti of tr.initializer){let tr=tn.get(ti.name);if(void 0===tr){let to=new tp;to.type={shape:{dims:tl.ProtoUtil.tensorDimsFromProto(ti.dims)},tensorType:tl.ProtoUtil.tensorDataTypeFromProto(ti.dataType)},tr=this._allData.push(to)-1,tn.set(ti.name,tr)}this._allData[tr]._from=-1,this._allData[tr].tensor=tu.Tensor.fromProto(ti)}for(let tr=0;tr{this._allData[tn]._to.forEach(tn=>{tr.add(tn)})});let tn=Array.from(tr),ti=Array(this._nodes.length).fill("white");for(;tn.length>0;){let tr=tn.pop();"gray"===ti[tr]?ti[tr]="black":(tn.push(tr),ti[tr]="gray",this._nodes[tr].outputs.forEach(to=>{let ta=this._allData[to];if(void 0!==ta.tensor)throw Error("node outputs should not be initialized");if(ta._from!==tr)throw 
Error("from property of the Value object doesn't match index of Node being processed");ta._to.forEach(tr=>{if("gray"===ti[tr])throw Error("model graph is cyclic");"white"===ti[tr]&&tn.push(tr)})}))}}transformGraph(tr){this.removeAllIdentityNodes(),this.removeAllDropoutNodes(),this.fuseConvActivationNodes(),tr&&tr.transformGraph(this),this.finalizeGraph()}finalizeGraph(){let tr=0;for(let tn=0;tn0&&(this._nodes[tn].inputs.forEach(ti=>{let to=this._allData[ti]._to.indexOf(tn+tr);-1!==to&&(this._allData[ti]._to[to]=tn)}),this._nodes[tn].outputs.forEach(ti=>{this._allData[ti]._from&&this._allData[ti]._from===tn+tr&&(this._allData[ti]._from=tn)})):(tr++,this._nodes[tn].outputs.forEach(tr=>{this._allData[tr]._from=-2}),this._nodes.splice(tn,1),tn--);tr=0;for(let tn=0;tn0){let ti=-1;void 0!==this._allData[tn].from&&-1!==this._allData[tn].from?-1!==(ti=this._nodes[this._allData[tn].from].outputs.indexOf(tn+tr))&&(this._nodes[this._allData[tn].from].outputs[ti]=tn):-1!==(ti=this._allInputIndices.indexOf(tn+tr))&&(this._allInputIndices[ti]=tn),this._allData[tn].to.forEach(to=>{-1!==(ti=this._nodes[to].inputs.indexOf(tn+tr))&&(this._nodes[to].inputs[ti]=tn)}),0===this._allData[tn].to.length&&-1!==(ti=this._allOutputIndices.indexOf(tn+tr))&&(this._allOutputIndices[ti]=tn)}}else tr++,this._allData.splice(tn,1),tn--}deleteNode(tr){let tn=this._nodes[tr];if(tn.outputs.length>1){for(let tr=1;tr0)throw Error("Node deletion with more than one output connected to other nodes is not supported. ")}tn.executeNode=!1;let ti=tn.inputs[0],to=tn.outputs[0],ta=this._allData[to].to,ts=this._allData[ti].to.indexOf(tr);if(-1===ts)throw Error("The Value object doesn't have the current Node in it's 'to' property ");this._allData[ti].to.splice(ts,1),this._allData[to]._to=[];let tu=this._allOutputIndices.indexOf(to);if(-1!==tu&&(this._allOutputIndices[tu]=ti),ta&&ta.length>0)for(let tr of ta){let tn=this._nodes[tr].inputs.indexOf(to);if(-1===tn)throw Error("The Node object doesn't have the output Value in it's 'inputs' property ");this._nodes[tr].inputs[tn]=ti,this._allData[ti].to.push(tr)}}removeAllDropoutNodes(){let tr=0;for(let tn of this._nodes){if("Dropout"===tn.opType){if(1!==tn.inputs.length)throw Error("Dropout nodes should only contain one input. 
");if(1!==tn.outputs.length&&2!==tn.outputs.length)throw Error("Dropout nodes should contain either 1 or 2 output(s)");if(2===tn.outputs.length&&0!==this._allData[tn.outputs[1]]._to.length)throw Error("Dropout nodes's second output should not be referenced by other nodes");this.deleteNode(tr)}tr++}}removeAllIdentityNodes(){let tr=0;for(let tn of this._nodes)"Identity"===tn.opType&&this.deleteNode(tr),tr++}isActivation(tr){switch(tr.opType){case"Relu":case"Sigmoid":case"Clip":return!0;default:return!1}}fuseConvActivationNodes(){for(let tr of this._nodes)if("Conv"===tr.opType){let tn=this._allData[tr.outputs[0]]._to;if(1===tn.length&&this.isActivation(this._nodes[tn[0]])){let ti=this._nodes[tn[0]];if("Clip"===ti.opType){if(1===ti.inputs.length)try{tr.attributes.set("activation_params","floats",[ti.attributes.getFloat("min"),ti.attributes.getFloat("max")])}catch(tn){tr.attributes.set("activation_params","floats",[tl.MIN_CLIP,tl.MAX_CLIP])}else{if(!(ti.inputs.length>=3&&void 0!==this._allData[ti.inputs[1]].tensor&&void 0!==this._allData[ti.inputs[2]].tensor))continue;tr.attributes.set("activation_params","floats",[this._allData[ti.inputs[1]].tensor.floatData[0],this._allData[ti.inputs[2]].tensor.floatData[0]])}}tr.attributes.set("activation","string",ti.opType),this.deleteNode(tn[0])}}}}},6231:(tr,tn)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.now=tn.Profiler=tn.Logger=void 0;let ti={verbose:1e3,info:2e3,warning:4e3,error:5e3,fatal:6e3},to={none:new class{log(tr,tn,ti){}},console:new class{log(tr,tn,ti){console.log(`${this.color(tr)} ${ti?"\x1b[35m"+ti+"\x1b[0m ":""}${tn}`)}color(tr){switch(tr){case"verbose":return"\x1b[34;40mv\x1b[0m";case"info":return"\x1b[32mi\x1b[0m";case"warning":return"\x1b[30;43mw\x1b[0m";case"error":return"\x1b[31;40me\x1b[0m";case"fatal":return"\x1b[101mf\x1b[0m";default:throw Error(`unsupported severity: ${tr}`)}}}},ta={provider:"console",minimalSeverity:"warning",logDateTime:!0,logSourceLocation:!1},ts={"":ta};function tu(tr,tn,ti,to){var ta;if(void 0===tn)return ta=tr,{verbose:tu.verbose.bind(null,ta),info:tu.info.bind(null,ta),warning:tu.warning.bind(null,ta),error:tu.error.bind(null,ta),fatal:tu.fatal.bind(null,ta)};if(void 0===ti)tl(tr,tn);else if("number"==typeof ti&&void 0===to)tl(tr,tn);else if("string"==typeof ti&&void 0===to)tl(tr,ti,0,tn);else{if("string"!=typeof ti||"number"!=typeof to)throw TypeError("input is valid");tl(tr,ti,0,tn)}}function tl(tr,tn,ta,tu){let tl=ts[tu||""]||ts[""];ti[tr]{tu.then(async tn=>{ta&&await ta.end(),tr(tn)},async tr=>{ta&&await ta.end(),tn(tr)})});if(!ts&&ta){let tr=ta.end();if(tr&&"function"==typeof tr.then)return new Promise((tn,ti)=>{tr.then(()=>{tn(tu)},tr=>{ti(tr)})})}return tu}begin(tr,ti,to){if(!this._started)throw Error("profiler is not started yet");if(void 0===to){let to=(0,tn.now)();return this.flush(to),new tc(tr,ti,to,tr=>this.endSync(tr))}{let tn=to.beginTimer();return new tc(tr,ti,0,async tr=>this.end(tr),tn,to)}}async end(tr){let tn=await tr.checkTimer();this._timingEvents.length=this._flushBatchSize||tr-this._flushTime>=this._flushIntervalInMilliseconds){for(let tr=this._flushPointer;this._flushPointerperformance.now():Date.now},2644:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.Model=void 0;let to=ti(5686),ta=ti(1446),ts=ti(7070),tu=ti(9395),tl=ti(2517);var tc=tu.onnxruntime.experimental.fbs;tn.Model=class{constructor(){}load(tr,tn,ti){if(!ti)try{return void this.loadFromOnnxFormat(tr,tn)}catch(tr){if(void 0!==ti)throw 
tr}this.loadFromOrtFormat(tr,tn)}loadFromOnnxFormat(tr,tn){let ti=ta.onnx.ModelProto.decode(tr);if(3>tl.LongUtil.longToNumber(ti.irVersion))throw Error("only support ONNX model with IR_VERSION>=3");this._opsets=ti.opsetImport.map(tr=>({domain:tr.domain,version:tl.LongUtil.longToNumber(tr.version)})),this._graph=ts.Graph.from(ti.graph,tn)}loadFromOrtFormat(tr,tn){let ti=new to.flatbuffers.ByteBuffer(tr),ta=tc.InferenceSession.getRootAsInferenceSession(ti).model();if(3>tl.LongUtil.longToNumber(ta.irVersion()))throw Error("only support ONNX model with IR_VERSION>=3");this._opsets=[];for(let tr=0;tr{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.FLOAT_TYPES=tn.INT_TYPES=tn.NUMBER_TYPES=void 0,tn.NUMBER_TYPES=["float32","float64","int32","int16","int8","uint16","uint32","uint8"],tn.INT_TYPES=["int32","int16","int8","uint16","uint32","uint8"],tn.FLOAT_TYPES=["float32","float64"]},1047:(tr,tn)=>{"use strict";function ti(tr,tn){if(tn.endsWith("+")){let ti=Number.parseInt(tn.substring(0,tn.length-1),10);return!isNaN(ti)&&ti<=tr}if(2===tn.split("-").length){let ti=tn.split("-"),to=Number.parseInt(ti[0],10),ta=Number.parseInt(ti[1],10);return!isNaN(to)&&!isNaN(ta)&&to<=tr&&tr<=ta}return Number.parseInt(tn,10)===tr}Object.defineProperty(tn,"__esModule",{value:!0}),tn.resolveOperator=void 0,tn.resolveOperator=function(tr,tn,to){for(let ta of to){let to=ta[0],ts=ta[1],tu=ta[2],tl=ta[3],tc=ta[4];if(tr.opType===to){for(let tr of tn)if((tr.domain===ts||"ai.onnx"===tr.domain&&""===ts)&&ti(tr.version,tu))return{opImpl:tl,opInit:tc}}}throw TypeError(`cannot resolve operator '${tr.opType}' with opsets: ${tn.map(tr=>`${tr.domain||"ai.onnx"} v${tr.version}`).join(", ")}`)}},9395:(tr,tn,ti)=>{"use strict";var to,ta;Object.defineProperty(tn,"__esModule",{value:!0}),tn.onnxruntime=void 0;let ts=ti(5686);(function(tr){let tn;!function(tr){tr[tr.UNDEFINED=0]="UNDEFINED",tr[tr.FLOAT=1]="FLOAT",tr[tr.INT=2]="INT",tr[tr.STRING=3]="STRING",tr[tr.TENSOR=4]="TENSOR",tr[tr.GRAPH=5]="GRAPH",tr[tr.FLOATS=6]="FLOATS",tr[tr.INTS=7]="INTS",tr[tr.STRINGS=8]="STRINGS",tr[tr.TENSORS=9]="TENSORS",tr[tr.GRAPHS=10]="GRAPHS",tr[tr.SPARSE_TENSOR=11]="SPARSE_TENSOR",tr[tr.SPARSE_TENSORS=12]="SPARSE_TENSORS"}(tn=tr.AttributeType||(tr.AttributeType={}))})((ta=(to=tn.onnxruntime||(tn.onnxruntime={})).experimental||(to.experimental={})).fbs||(ta.fbs={})),function(tr){!function(tr){!function(tr){let tn;!function(tr){tr[tr.UNKNOWN=0]="UNKNOWN",tr[tr.VALUE=1]="VALUE",tr[tr.PARAM=2]="PARAM"}(tn=tr.DimensionValueType||(tr.DimensionValueType={}))}(tr.fbs||(tr.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tr){!function(tr){let tn;!function(tr){tr[tr.UNDEFINED=0]="UNDEFINED",tr[tr.FLOAT=1]="FLOAT",tr[tr.UINT8=2]="UINT8",tr[tr.INT8=3]="INT8",tr[tr.UINT16=4]="UINT16",tr[tr.INT16=5]="INT16",tr[tr.INT32=6]="INT32",tr[tr.INT64=7]="INT64",tr[tr.STRING=8]="STRING",tr[tr.BOOL=9]="BOOL",tr[tr.FLOAT16=10]="FLOAT16",tr[tr.DOUBLE=11]="DOUBLE",tr[tr.UINT32=12]="UINT32",tr[tr.UINT64=13]="UINT64",tr[tr.COMPLEX64=14]="COMPLEX64",tr[tr.COMPLEX128=15]="COMPLEX128",tr[tr.BFLOAT16=16]="BFLOAT16"}(tn=tr.TensorDataType||(tr.TensorDataType={}))}(tr.fbs||(tr.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tr){!function(tr){let 
tn;!function(tr){tr[tr.Primitive=0]="Primitive",tr[tr.Fused=1]="Fused"}(tn=tr.NodeType||(tr.NodeType={}))}(tr.fbs||(tr.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tr){!function(tr){let tn;!function(tr){tr[tr.NONE=0]="NONE",tr[tr.tensor_type=1]="tensor_type",tr[tr.sequence_type=2]="sequence_type",tr[tr.map_type=3]="map_type"}(tn=tr.TypeInfoValue||(tr.TypeInfoValue={}))}(tr.fbs||(tr.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsShape(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsShape(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}dim(tn,ti){let to=this.bb.__offset(this.bb_pos,4);return to?(ti||new tr.experimental.fbs.Dimension).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+to)+4*tn),this.bb):null}dimLength(){let tr=this.bb.__offset(this.bb_pos,4);return tr?this.bb.__vector_len(this.bb_pos+tr):0}static startShape(tr){tr.startObject(1)}static addDim(tr,tn){tr.addFieldOffset(0,tn,0)}static createDimVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startDimVector(tr,tn){tr.startVector(4,tn,4)}static endShape(tr){return tr.endObject()}static createShape(tr,tn){return ti.startShape(tr),ti.addDim(tr,tn),ti.endShape(tr)}}tn.Shape=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsDimension(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsDimension(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}value(tn){let ti=this.bb.__offset(this.bb_pos,4);return ti?(tn||new tr.experimental.fbs.DimensionValue).__init(this.bb.__indirect(this.bb_pos+ti),this.bb):null}denotation(tr){let tn=this.bb.__offset(this.bb_pos,6);return tn?this.bb.__string(this.bb_pos+tn,tr):null}static startDimension(tr){tr.startObject(2)}static addValue(tr,tn){tr.addFieldOffset(0,tn,0)}static addDenotation(tr,tn){tr.addFieldOffset(1,tn,0)}static endDimension(tr){return tr.endObject()}static createDimension(tr,tn,to){return ti.startDimension(tr),ti.addValue(tr,tn),ti.addDenotation(tr,to),ti.endDimension(tr)}}tn.Dimension=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsDimensionValue(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsDimensionValue(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}dimType(){let tn=this.bb.__offset(this.bb_pos,4);return tn?this.bb.readInt8(this.bb_pos+tn):tr.experimental.fbs.DimensionValueType.UNKNOWN}dimValue(){let tr=this.bb.__offset(this.bb_pos,6);return 
tr?this.bb.readInt64(this.bb_pos+tr):this.bb.createLong(0,0)}dimParam(tr){let tn=this.bb.__offset(this.bb_pos,8);return tn?this.bb.__string(this.bb_pos+tn,tr):null}static startDimensionValue(tr){tr.startObject(3)}static addDimType(tn,ti){tn.addFieldInt8(0,ti,tr.experimental.fbs.DimensionValueType.UNKNOWN)}static addDimValue(tr,tn){tr.addFieldInt64(1,tn,tr.createLong(0,0))}static addDimParam(tr,tn){tr.addFieldOffset(2,tn,0)}static endDimensionValue(tr){return tr.endObject()}static createDimensionValue(tr,tn,to,ta){return ti.startDimensionValue(tr),ti.addDimType(tr,tn),ti.addDimValue(tr,to),ti.addDimParam(tr,ta),ti.endDimensionValue(tr)}}tn.DimensionValue=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsTensorTypeAndShape(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsTensorTypeAndShape(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}elemType(){let tn=this.bb.__offset(this.bb_pos,4);return tn?this.bb.readInt32(this.bb_pos+tn):tr.experimental.fbs.TensorDataType.UNDEFINED}shape(tn){let ti=this.bb.__offset(this.bb_pos,6);return ti?(tn||new tr.experimental.fbs.Shape).__init(this.bb.__indirect(this.bb_pos+ti),this.bb):null}static startTensorTypeAndShape(tr){tr.startObject(2)}static addElemType(tn,ti){tn.addFieldInt32(0,ti,tr.experimental.fbs.TensorDataType.UNDEFINED)}static addShape(tr,tn){tr.addFieldOffset(1,tn,0)}static endTensorTypeAndShape(tr){return tr.endObject()}static createTensorTypeAndShape(tr,tn,to){return ti.startTensorTypeAndShape(tr),ti.addElemType(tr,tn),ti.addShape(tr,to),ti.endTensorTypeAndShape(tr)}}tn.TensorTypeAndShape=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsMapType(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsMapType(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}keyType(){let tn=this.bb.__offset(this.bb_pos,4);return tn?this.bb.readInt32(this.bb_pos+tn):tr.experimental.fbs.TensorDataType.UNDEFINED}valueType(tn){let ti=this.bb.__offset(this.bb_pos,6);return ti?(tn||new tr.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+ti),this.bb):null}static startMapType(tr){tr.startObject(2)}static addKeyType(tn,ti){tn.addFieldInt32(0,ti,tr.experimental.fbs.TensorDataType.UNDEFINED)}static addValueType(tr,tn){tr.addFieldOffset(1,tn,0)}static endMapType(tr){return tr.endObject()}static createMapType(tr,tn,to){return ti.startMapType(tr),ti.addKeyType(tr,tn),ti.addValueType(tr,to),ti.endMapType(tr)}}tn.MapType=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsSequenceType(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsSequenceType(tr,tn){return 
tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}elemType(tn){let ti=this.bb.__offset(this.bb_pos,4);return ti?(tn||new tr.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+ti),this.bb):null}static startSequenceType(tr){tr.startObject(1)}static addElemType(tr,tn){tr.addFieldOffset(0,tn,0)}static endSequenceType(tr){return tr.endObject()}static createSequenceType(tr,tn){return ti.startSequenceType(tr),ti.addElemType(tr,tn),ti.endSequenceType(tr)}}tn.SequenceType=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tr){(tr.fbs||(tr.fbs={})).EdgeEnd=class{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}nodeIndex(){return this.bb.readUint32(this.bb_pos)}srcArgIndex(){return this.bb.readInt32(this.bb_pos+4)}dstArgIndex(){return this.bb.readInt32(this.bb_pos+8)}static createEdgeEnd(tr,tn,ti,to){return tr.prep(4,12),tr.writeInt32(to),tr.writeInt32(ti),tr.writeInt32(tn),tr.offset()}}}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsNodeEdge(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsNodeEdge(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}nodeIndex(){let tr=this.bb.__offset(this.bb_pos,4);return tr?this.bb.readUint32(this.bb_pos+tr):0}inputEdges(tn,ti){let to=this.bb.__offset(this.bb_pos,6);return to?(ti||new tr.experimental.fbs.EdgeEnd).__init(this.bb.__vector(this.bb_pos+to)+12*tn,this.bb):null}inputEdgesLength(){let tr=this.bb.__offset(this.bb_pos,6);return tr?this.bb.__vector_len(this.bb_pos+tr):0}outputEdges(tn,ti){let to=this.bb.__offset(this.bb_pos,8);return to?(ti||new tr.experimental.fbs.EdgeEnd).__init(this.bb.__vector(this.bb_pos+to)+12*tn,this.bb):null}outputEdgesLength(){let tr=this.bb.__offset(this.bb_pos,8);return tr?this.bb.__vector_len(this.bb_pos+tr):0}static startNodeEdge(tr){tr.startObject(3)}static addNodeIndex(tr,tn){tr.addFieldInt32(0,tn,0)}static addInputEdges(tr,tn){tr.addFieldOffset(1,tn,0)}static startInputEdgesVector(tr,tn){tr.startVector(12,tn,4)}static addOutputEdges(tr,tn){tr.addFieldOffset(2,tn,0)}static startOutputEdgesVector(tr,tn){tr.startVector(12,tn,4)}static endNodeEdge(tr){return tr.endObject()}static createNodeEdge(tr,tn,to,ta){return ti.startNodeEdge(tr),ti.addNodeIndex(tr,tn),ti.addInputEdges(tr,to),ti.addOutputEdges(tr,ta),ti.endNodeEdge(tr)}}tn.NodeEdge=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsNode(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsNode(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}name(tr){let tn=this.bb.__offset(this.bb_pos,4);return tn?this.bb.__string(this.bb_pos+tn,tr):null}docString(tr){let tn=this.bb.__offset(this.bb_pos,6);return tn?this.bb.__string(this.bb_pos+tn,tr):null}domain(tr){let 
tn=this.bb.__offset(this.bb_pos,8);return tn?this.bb.__string(this.bb_pos+tn,tr):null}sinceVersion(){let tr=this.bb.__offset(this.bb_pos,10);return tr?this.bb.readInt32(this.bb_pos+tr):0}index(){let tr=this.bb.__offset(this.bb_pos,12);return tr?this.bb.readUint32(this.bb_pos+tr):0}opType(tr){let tn=this.bb.__offset(this.bb_pos,14);return tn?this.bb.__string(this.bb_pos+tn,tr):null}type(){let tn=this.bb.__offset(this.bb_pos,16);return tn?this.bb.readInt32(this.bb_pos+tn):tr.experimental.fbs.NodeType.Primitive}executionProviderType(tr){let tn=this.bb.__offset(this.bb_pos,18);return tn?this.bb.__string(this.bb_pos+tn,tr):null}inputs(tr,tn){let ti=this.bb.__offset(this.bb_pos,20);return ti?this.bb.__string(this.bb.__vector(this.bb_pos+ti)+4*tr,tn):null}inputsLength(){let tr=this.bb.__offset(this.bb_pos,20);return tr?this.bb.__vector_len(this.bb_pos+tr):0}outputs(tr,tn){let ti=this.bb.__offset(this.bb_pos,22);return ti?this.bb.__string(this.bb.__vector(this.bb_pos+ti)+4*tr,tn):null}outputsLength(){let tr=this.bb.__offset(this.bb_pos,22);return tr?this.bb.__vector_len(this.bb_pos+tr):0}attributes(tn,ti){let to=this.bb.__offset(this.bb_pos,24);return to?(ti||new tr.experimental.fbs.Attribute).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+to)+4*tn),this.bb):null}attributesLength(){let tr=this.bb.__offset(this.bb_pos,24);return tr?this.bb.__vector_len(this.bb_pos+tr):0}inputArgCounts(tr){let tn=this.bb.__offset(this.bb_pos,26);return tn?this.bb.readInt32(this.bb.__vector(this.bb_pos+tn)+4*tr):0}inputArgCountsLength(){let tr=this.bb.__offset(this.bb_pos,26);return tr?this.bb.__vector_len(this.bb_pos+tr):0}inputArgCountsArray(){let tr=this.bb.__offset(this.bb_pos,26);return tr?new Int32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+tr),this.bb.__vector_len(this.bb_pos+tr)):null}implicitInputs(tr,tn){let ti=this.bb.__offset(this.bb_pos,28);return ti?this.bb.__string(this.bb.__vector(this.bb_pos+ti)+4*tr,tn):null}implicitInputsLength(){let tr=this.bb.__offset(this.bb_pos,28);return tr?this.bb.__vector_len(this.bb_pos+tr):0}static startNode(tr){tr.startObject(13)}static addName(tr,tn){tr.addFieldOffset(0,tn,0)}static addDocString(tr,tn){tr.addFieldOffset(1,tn,0)}static addDomain(tr,tn){tr.addFieldOffset(2,tn,0)}static addSinceVersion(tr,tn){tr.addFieldInt32(3,tn,0)}static addIndex(tr,tn){tr.addFieldInt32(4,tn,0)}static addOpType(tr,tn){tr.addFieldOffset(5,tn,0)}static addType(tn,ti){tn.addFieldInt32(6,ti,tr.experimental.fbs.NodeType.Primitive)}static addExecutionProviderType(tr,tn){tr.addFieldOffset(7,tn,0)}static addInputs(tr,tn){tr.addFieldOffset(8,tn,0)}static createInputsVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startInputsVector(tr,tn){tr.startVector(4,tn,4)}static addOutputs(tr,tn){tr.addFieldOffset(9,tn,0)}static createOutputsVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startOutputsVector(tr,tn){tr.startVector(4,tn,4)}static addAttributes(tr,tn){tr.addFieldOffset(10,tn,0)}static createAttributesVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startAttributesVector(tr,tn){tr.startVector(4,tn,4)}static addInputArgCounts(tr,tn){tr.addFieldOffset(11,tn,0)}static createInputArgCountsVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addInt32(tn[ti]);return tr.endVector()}static 
startInputArgCountsVector(tr,tn){tr.startVector(4,tn,4)}static addImplicitInputs(tr,tn){tr.addFieldOffset(12,tn,0)}static createImplicitInputsVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startImplicitInputsVector(tr,tn){tr.startVector(4,tn,4)}static endNode(tr){return tr.endObject()}static createNode(tr,tn,to,ta,ts,tu,tl,tc,tp,tf,td,th,tg,tb){return ti.startNode(tr),ti.addName(tr,tn),ti.addDocString(tr,to),ti.addDomain(tr,ta),ti.addSinceVersion(tr,ts),ti.addIndex(tr,tu),ti.addOpType(tr,tl),ti.addType(tr,tc),ti.addExecutionProviderType(tr,tp),ti.addInputs(tr,tf),ti.addOutputs(tr,td),ti.addAttributes(tr,th),ti.addInputArgCounts(tr,tg),ti.addImplicitInputs(tr,tb),ti.endNode(tr)}}tn.Node=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsValueInfo(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsValueInfo(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}name(tr){let tn=this.bb.__offset(this.bb_pos,4);return tn?this.bb.__string(this.bb_pos+tn,tr):null}docString(tr){let tn=this.bb.__offset(this.bb_pos,6);return tn?this.bb.__string(this.bb_pos+tn,tr):null}type(tn){let ti=this.bb.__offset(this.bb_pos,8);return ti?(tn||new tr.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+ti),this.bb):null}static startValueInfo(tr){tr.startObject(3)}static addName(tr,tn){tr.addFieldOffset(0,tn,0)}static addDocString(tr,tn){tr.addFieldOffset(1,tn,0)}static addType(tr,tn){tr.addFieldOffset(2,tn,0)}static endValueInfo(tr){return tr.endObject()}static createValueInfo(tr,tn,to,ta){return ti.startValueInfo(tr),ti.addName(tr,tn),ti.addDocString(tr,to),ti.addType(tr,ta),ti.endValueInfo(tr)}}tn.ValueInfo=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsTypeInfo(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsTypeInfo(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}denotation(tr){let tn=this.bb.__offset(this.bb_pos,4);return tn?this.bb.__string(this.bb_pos+tn,tr):null}valueType(){let tn=this.bb.__offset(this.bb_pos,6);return tn?this.bb.readUint8(this.bb_pos+tn):tr.experimental.fbs.TypeInfoValue.NONE}value(tr){let tn=this.bb.__offset(this.bb_pos,8);return tn?this.bb.__union(tr,this.bb_pos+tn):null}static startTypeInfo(tr){tr.startObject(3)}static addDenotation(tr,tn){tr.addFieldOffset(0,tn,0)}static addValueType(tn,ti){tn.addFieldInt8(1,ti,tr.experimental.fbs.TypeInfoValue.NONE)}static addValue(tr,tn){tr.addFieldOffset(2,tn,0)}static endTypeInfo(tr){return tr.endObject()}static createTypeInfo(tr,tn,to,ta){return ti.startTypeInfo(tr),ti.addDenotation(tr,tn),ti.addValueType(tr,to),ti.addValue(tr,ta),ti.endTypeInfo(tr)}}tn.TypeInfo=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tr){!function(tr){class 
tn{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsOperatorSetId(tr,ti){return(ti||new tn).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsOperatorSetId(tr,ti){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(ti||new tn).__init(tr.readInt32(tr.position())+tr.position(),tr)}domain(tr){let tn=this.bb.__offset(this.bb_pos,4);return tn?this.bb.__string(this.bb_pos+tn,tr):null}version(){let tr=this.bb.__offset(this.bb_pos,6);return tr?this.bb.readInt64(this.bb_pos+tr):this.bb.createLong(0,0)}static startOperatorSetId(tr){tr.startObject(2)}static addDomain(tr,tn){tr.addFieldOffset(0,tn,0)}static addVersion(tr,tn){tr.addFieldInt64(1,tn,tr.createLong(0,0))}static endOperatorSetId(tr){return tr.endObject()}static createOperatorSetId(tr,ti,to){return tn.startOperatorSetId(tr),tn.addDomain(tr,ti),tn.addVersion(tr,to),tn.endOperatorSetId(tr)}}tr.OperatorSetId=tn}(tr.fbs||(tr.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsTensor(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsTensor(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}name(tr){let tn=this.bb.__offset(this.bb_pos,4);return tn?this.bb.__string(this.bb_pos+tn,tr):null}docString(tr){let tn=this.bb.__offset(this.bb_pos,6);return tn?this.bb.__string(this.bb_pos+tn,tr):null}dims(tr){let tn=this.bb.__offset(this.bb_pos,8);return tn?this.bb.readInt64(this.bb.__vector(this.bb_pos+tn)+8*tr):this.bb.createLong(0,0)}dimsLength(){let tr=this.bb.__offset(this.bb_pos,8);return tr?this.bb.__vector_len(this.bb_pos+tr):0}dataType(){let tn=this.bb.__offset(this.bb_pos,10);return tn?this.bb.readInt32(this.bb_pos+tn):tr.experimental.fbs.TensorDataType.UNDEFINED}rawData(tr){let tn=this.bb.__offset(this.bb_pos,12);return tn?this.bb.readUint8(this.bb.__vector(this.bb_pos+tn)+tr):0}rawDataLength(){let tr=this.bb.__offset(this.bb_pos,12);return tr?this.bb.__vector_len(this.bb_pos+tr):0}rawDataArray(){let tr=this.bb.__offset(this.bb_pos,12);return tr?new Uint8Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+tr),this.bb.__vector_len(this.bb_pos+tr)):null}stringData(tr,tn){let ti=this.bb.__offset(this.bb_pos,14);return ti?this.bb.__string(this.bb.__vector(this.bb_pos+ti)+4*tr,tn):null}stringDataLength(){let tr=this.bb.__offset(this.bb_pos,14);return tr?this.bb.__vector_len(this.bb_pos+tr):0}static startTensor(tr){tr.startObject(6)}static addName(tr,tn){tr.addFieldOffset(0,tn,0)}static addDocString(tr,tn){tr.addFieldOffset(1,tn,0)}static addDims(tr,tn){tr.addFieldOffset(2,tn,0)}static createDimsVector(tr,tn){tr.startVector(8,tn.length,8);for(let ti=tn.length-1;ti>=0;ti--)tr.addInt64(tn[ti]);return tr.endVector()}static startDimsVector(tr,tn){tr.startVector(8,tn,8)}static addDataType(tn,ti){tn.addFieldInt32(3,ti,tr.experimental.fbs.TensorDataType.UNDEFINED)}static addRawData(tr,tn){tr.addFieldOffset(4,tn,0)}static createRawDataVector(tr,tn){tr.startVector(1,tn.length,1);for(let ti=tn.length-1;ti>=0;ti--)tr.addInt8(tn[ti]);return tr.endVector()}static startRawDataVector(tr,tn){tr.startVector(1,tn,1)}static addStringData(tr,tn){tr.addFieldOffset(5,tn,0)}static 
createStringDataVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startStringDataVector(tr,tn){tr.startVector(4,tn,4)}static endTensor(tr){return tr.endObject()}static createTensor(tr,tn,to,ta,ts,tu,tl){return ti.startTensor(tr),ti.addName(tr,tn),ti.addDocString(tr,to),ti.addDims(tr,ta),ti.addDataType(tr,ts),ti.addRawData(tr,tu),ti.addStringData(tr,tl),ti.endTensor(tr)}}tn.Tensor=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsSparseTensor(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsSparseTensor(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}values(tn){let ti=this.bb.__offset(this.bb_pos,4);return ti?(tn||new tr.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+ti),this.bb):null}indices(tn){let ti=this.bb.__offset(this.bb_pos,6);return ti?(tn||new tr.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+ti),this.bb):null}dims(tr){let tn=this.bb.__offset(this.bb_pos,8);return tn?this.bb.readInt64(this.bb.__vector(this.bb_pos+tn)+8*tr):this.bb.createLong(0,0)}dimsLength(){let tr=this.bb.__offset(this.bb_pos,8);return tr?this.bb.__vector_len(this.bb_pos+tr):0}static startSparseTensor(tr){tr.startObject(3)}static addValues(tr,tn){tr.addFieldOffset(0,tn,0)}static addIndices(tr,tn){tr.addFieldOffset(1,tn,0)}static addDims(tr,tn){tr.addFieldOffset(2,tn,0)}static createDimsVector(tr,tn){tr.startVector(8,tn.length,8);for(let ti=tn.length-1;ti>=0;ti--)tr.addInt64(tn[ti]);return tr.endVector()}static startDimsVector(tr,tn){tr.startVector(8,tn,8)}static endSparseTensor(tr){return tr.endObject()}static createSparseTensor(tr,tn,to,ta){return ti.startSparseTensor(tr),ti.addValues(tr,tn),ti.addIndices(tr,to),ti.addDims(tr,ta),ti.endSparseTensor(tr)}}tn.SparseTensor=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsAttribute(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsAttribute(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}name(tr){let tn=this.bb.__offset(this.bb_pos,4);return tn?this.bb.__string(this.bb_pos+tn,tr):null}docString(tr){let tn=this.bb.__offset(this.bb_pos,6);return tn?this.bb.__string(this.bb_pos+tn,tr):null}type(){let tn=this.bb.__offset(this.bb_pos,8);return tn?this.bb.readInt32(this.bb_pos+tn):tr.experimental.fbs.AttributeType.UNDEFINED}f(){let tr=this.bb.__offset(this.bb_pos,10);return tr?this.bb.readFloat32(this.bb_pos+tr):0}i(){let tr=this.bb.__offset(this.bb_pos,12);return tr?this.bb.readInt64(this.bb_pos+tr):this.bb.createLong(0,0)}s(tr){let tn=this.bb.__offset(this.bb_pos,14);return tn?this.bb.__string(this.bb_pos+tn,tr):null}t(tn){let ti=this.bb.__offset(this.bb_pos,16);return ti?(tn||new tr.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+ti),this.bb):null}g(tn){let ti=this.bb.__offset(this.bb_pos,18);return ti?(tn||new 
tr.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb_pos+ti),this.bb):null}floats(tr){let tn=this.bb.__offset(this.bb_pos,20);return tn?this.bb.readFloat32(this.bb.__vector(this.bb_pos+tn)+4*tr):0}floatsLength(){let tr=this.bb.__offset(this.bb_pos,20);return tr?this.bb.__vector_len(this.bb_pos+tr):0}floatsArray(){let tr=this.bb.__offset(this.bb_pos,20);return tr?new Float32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+tr),this.bb.__vector_len(this.bb_pos+tr)):null}ints(tr){let tn=this.bb.__offset(this.bb_pos,22);return tn?this.bb.readInt64(this.bb.__vector(this.bb_pos+tn)+8*tr):this.bb.createLong(0,0)}intsLength(){let tr=this.bb.__offset(this.bb_pos,22);return tr?this.bb.__vector_len(this.bb_pos+tr):0}strings(tr,tn){let ti=this.bb.__offset(this.bb_pos,24);return ti?this.bb.__string(this.bb.__vector(this.bb_pos+ti)+4*tr,tn):null}stringsLength(){let tr=this.bb.__offset(this.bb_pos,24);return tr?this.bb.__vector_len(this.bb_pos+tr):0}tensors(tn,ti){let to=this.bb.__offset(this.bb_pos,26);return to?(ti||new tr.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+to)+4*tn),this.bb):null}tensorsLength(){let tr=this.bb.__offset(this.bb_pos,26);return tr?this.bb.__vector_len(this.bb_pos+tr):0}graphs(tn,ti){let to=this.bb.__offset(this.bb_pos,28);return to?(ti||new tr.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+to)+4*tn),this.bb):null}graphsLength(){let tr=this.bb.__offset(this.bb_pos,28);return tr?this.bb.__vector_len(this.bb_pos+tr):0}static startAttribute(tr){tr.startObject(13)}static addName(tr,tn){tr.addFieldOffset(0,tn,0)}static addDocString(tr,tn){tr.addFieldOffset(1,tn,0)}static addType(tn,ti){tn.addFieldInt32(2,ti,tr.experimental.fbs.AttributeType.UNDEFINED)}static addF(tr,tn){tr.addFieldFloat32(3,tn,0)}static addI(tr,tn){tr.addFieldInt64(4,tn,tr.createLong(0,0))}static addS(tr,tn){tr.addFieldOffset(5,tn,0)}static addT(tr,tn){tr.addFieldOffset(6,tn,0)}static addG(tr,tn){tr.addFieldOffset(7,tn,0)}static addFloats(tr,tn){tr.addFieldOffset(8,tn,0)}static createFloatsVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addFloat32(tn[ti]);return tr.endVector()}static startFloatsVector(tr,tn){tr.startVector(4,tn,4)}static addInts(tr,tn){tr.addFieldOffset(9,tn,0)}static createIntsVector(tr,tn){tr.startVector(8,tn.length,8);for(let ti=tn.length-1;ti>=0;ti--)tr.addInt64(tn[ti]);return tr.endVector()}static startIntsVector(tr,tn){tr.startVector(8,tn,8)}static addStrings(tr,tn){tr.addFieldOffset(10,tn,0)}static createStringsVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startStringsVector(tr,tn){tr.startVector(4,tn,4)}static addTensors(tr,tn){tr.addFieldOffset(11,tn,0)}static createTensorsVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startTensorsVector(tr,tn){tr.startVector(4,tn,4)}static addGraphs(tr,tn){tr.addFieldOffset(12,tn,0)}static createGraphsVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startGraphsVector(tr,tn){tr.startVector(4,tn,4)}static endAttribute(tr){return tr.endObject()}static createAttribute(tr,tn,to,ta,ts,tu,tl,tc,tp,tf,td,th,tg,tb){return 
ti.startAttribute(tr),ti.addName(tr,tn),ti.addDocString(tr,to),ti.addType(tr,ta),ti.addF(tr,ts),ti.addI(tr,tu),ti.addS(tr,tl),ti.addT(tr,tc),ti.addG(tr,tp),ti.addFloats(tr,tf),ti.addInts(tr,td),ti.addStrings(tr,th),ti.addTensors(tr,tg),ti.addGraphs(tr,tb),ti.endAttribute(tr)}}tn.Attribute=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsGraph(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsGraph(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}initializers(tn,ti){let to=this.bb.__offset(this.bb_pos,4);return to?(ti||new tr.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+to)+4*tn),this.bb):null}initializersLength(){let tr=this.bb.__offset(this.bb_pos,4);return tr?this.bb.__vector_len(this.bb_pos+tr):0}nodeArgs(tn,ti){let to=this.bb.__offset(this.bb_pos,6);return to?(ti||new tr.experimental.fbs.ValueInfo).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+to)+4*tn),this.bb):null}nodeArgsLength(){let tr=this.bb.__offset(this.bb_pos,6);return tr?this.bb.__vector_len(this.bb_pos+tr):0}nodes(tn,ti){let to=this.bb.__offset(this.bb_pos,8);return to?(ti||new tr.experimental.fbs.Node).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+to)+4*tn),this.bb):null}nodesLength(){let tr=this.bb.__offset(this.bb_pos,8);return tr?this.bb.__vector_len(this.bb_pos+tr):0}maxNodeIndex(){let tr=this.bb.__offset(this.bb_pos,10);return tr?this.bb.readUint32(this.bb_pos+tr):0}nodeEdges(tn,ti){let to=this.bb.__offset(this.bb_pos,12);return to?(ti||new tr.experimental.fbs.NodeEdge).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+to)+4*tn),this.bb):null}nodeEdgesLength(){let tr=this.bb.__offset(this.bb_pos,12);return tr?this.bb.__vector_len(this.bb_pos+tr):0}inputs(tr,tn){let ti=this.bb.__offset(this.bb_pos,14);return ti?this.bb.__string(this.bb.__vector(this.bb_pos+ti)+4*tr,tn):null}inputsLength(){let tr=this.bb.__offset(this.bb_pos,14);return tr?this.bb.__vector_len(this.bb_pos+tr):0}outputs(tr,tn){let ti=this.bb.__offset(this.bb_pos,16);return ti?this.bb.__string(this.bb.__vector(this.bb_pos+ti)+4*tr,tn):null}outputsLength(){let tr=this.bb.__offset(this.bb_pos,16);return tr?this.bb.__vector_len(this.bb_pos+tr):0}sparseInitializers(tn,ti){let to=this.bb.__offset(this.bb_pos,18);return to?(ti||new tr.experimental.fbs.SparseTensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+to)+4*tn),this.bb):null}sparseInitializersLength(){let tr=this.bb.__offset(this.bb_pos,18);return tr?this.bb.__vector_len(this.bb_pos+tr):0}static startGraph(tr){tr.startObject(8)}static addInitializers(tr,tn){tr.addFieldOffset(0,tn,0)}static createInitializersVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startInitializersVector(tr,tn){tr.startVector(4,tn,4)}static addNodeArgs(tr,tn){tr.addFieldOffset(1,tn,0)}static createNodeArgsVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startNodeArgsVector(tr,tn){tr.startVector(4,tn,4)}static addNodes(tr,tn){tr.addFieldOffset(2,tn,0)}static createNodesVector(tr,tn){tr.startVector(4,tn.length,4);for(let 
ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startNodesVector(tr,tn){tr.startVector(4,tn,4)}static addMaxNodeIndex(tr,tn){tr.addFieldInt32(3,tn,0)}static addNodeEdges(tr,tn){tr.addFieldOffset(4,tn,0)}static createNodeEdgesVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startNodeEdgesVector(tr,tn){tr.startVector(4,tn,4)}static addInputs(tr,tn){tr.addFieldOffset(5,tn,0)}static createInputsVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startInputsVector(tr,tn){tr.startVector(4,tn,4)}static addOutputs(tr,tn){tr.addFieldOffset(6,tn,0)}static createOutputsVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startOutputsVector(tr,tn){tr.startVector(4,tn,4)}static addSparseInitializers(tr,tn){tr.addFieldOffset(7,tn,0)}static createSparseInitializersVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startSparseInitializersVector(tr,tn){tr.startVector(4,tn,4)}static endGraph(tr){return tr.endObject()}static createGraph(tr,tn,to,ta,ts,tu,tl,tc,tp){return ti.startGraph(tr),ti.addInitializers(tr,tn),ti.addNodeArgs(tr,to),ti.addNodes(tr,ta),ti.addMaxNodeIndex(tr,ts),ti.addNodeEdges(tr,tu),ti.addInputs(tr,tl),ti.addOutputs(tr,tc),ti.addSparseInitializers(tr,tp),ti.endGraph(tr)}}tn.Graph=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsModel(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsModel(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}irVersion(){let tr=this.bb.__offset(this.bb_pos,4);return tr?this.bb.readInt64(this.bb_pos+tr):this.bb.createLong(0,0)}opsetImport(tn,ti){let to=this.bb.__offset(this.bb_pos,6);return to?(ti||new tr.experimental.fbs.OperatorSetId).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+to)+4*tn),this.bb):null}opsetImportLength(){let tr=this.bb.__offset(this.bb_pos,6);return tr?this.bb.__vector_len(this.bb_pos+tr):0}producerName(tr){let tn=this.bb.__offset(this.bb_pos,8);return tn?this.bb.__string(this.bb_pos+tn,tr):null}producerVersion(tr){let tn=this.bb.__offset(this.bb_pos,10);return tn?this.bb.__string(this.bb_pos+tn,tr):null}domain(tr){let tn=this.bb.__offset(this.bb_pos,12);return tn?this.bb.__string(this.bb_pos+tn,tr):null}modelVersion(){let tr=this.bb.__offset(this.bb_pos,14);return tr?this.bb.readInt64(this.bb_pos+tr):this.bb.createLong(0,0)}docString(tr){let tn=this.bb.__offset(this.bb_pos,16);return tn?this.bb.__string(this.bb_pos+tn,tr):null}graph(tn){let ti=this.bb.__offset(this.bb_pos,18);return ti?(tn||new tr.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb_pos+ti),this.bb):null}graphDocString(tr){let tn=this.bb.__offset(this.bb_pos,20);return tn?this.bb.__string(this.bb_pos+tn,tr):null}static startModel(tr){tr.startObject(9)}static addIrVersion(tr,tn){tr.addFieldInt64(0,tn,tr.createLong(0,0))}static addOpsetImport(tr,tn){tr.addFieldOffset(1,tn,0)}static createOpsetImportVector(tr,tn){tr.startVector(4,tn.length,4);for(let 
ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startOpsetImportVector(tr,tn){tr.startVector(4,tn,4)}static addProducerName(tr,tn){tr.addFieldOffset(2,tn,0)}static addProducerVersion(tr,tn){tr.addFieldOffset(3,tn,0)}static addDomain(tr,tn){tr.addFieldOffset(4,tn,0)}static addModelVersion(tr,tn){tr.addFieldInt64(5,tn,tr.createLong(0,0))}static addDocString(tr,tn){tr.addFieldOffset(6,tn,0)}static addGraph(tr,tn){tr.addFieldOffset(7,tn,0)}static addGraphDocString(tr,tn){tr.addFieldOffset(8,tn,0)}static endModel(tr){return tr.endObject()}static createModel(tr,tn,to,ta,ts,tu,tl,tc,tp,tf){return ti.startModel(tr),ti.addIrVersion(tr,tn),ti.addOpsetImport(tr,to),ti.addProducerName(tr,ta),ti.addProducerVersion(tr,ts),ti.addDomain(tr,tu),ti.addModelVersion(tr,tl),ti.addDocString(tr,tc),ti.addGraph(tr,tp),ti.addGraphDocString(tr,tf),ti.endModel(tr)}}tn.Model=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tr){!function(tr){class tn{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsKernelCreateInfos(tr,ti){return(ti||new tn).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsKernelCreateInfos(tr,ti){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(ti||new tn).__init(tr.readInt32(tr.position())+tr.position(),tr)}nodeIndices(tr){let tn=this.bb.__offset(this.bb_pos,4);return tn?this.bb.readUint32(this.bb.__vector(this.bb_pos+tn)+4*tr):0}nodeIndicesLength(){let tr=this.bb.__offset(this.bb_pos,4);return tr?this.bb.__vector_len(this.bb_pos+tr):0}nodeIndicesArray(){let tr=this.bb.__offset(this.bb_pos,4);return tr?new Uint32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+tr),this.bb.__vector_len(this.bb_pos+tr)):null}kernelDefHashes(tr){let tn=this.bb.__offset(this.bb_pos,6);return tn?this.bb.readUint64(this.bb.__vector(this.bb_pos+tn)+8*tr):this.bb.createLong(0,0)}kernelDefHashesLength(){let tr=this.bb.__offset(this.bb_pos,6);return tr?this.bb.__vector_len(this.bb_pos+tr):0}static startKernelCreateInfos(tr){tr.startObject(2)}static addNodeIndices(tr,tn){tr.addFieldOffset(0,tn,0)}static createNodeIndicesVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addInt32(tn[ti]);return tr.endVector()}static startNodeIndicesVector(tr,tn){tr.startVector(4,tn,4)}static addKernelDefHashes(tr,tn){tr.addFieldOffset(1,tn,0)}static createKernelDefHashesVector(tr,tn){tr.startVector(8,tn.length,8);for(let ti=tn.length-1;ti>=0;ti--)tr.addInt64(tn[ti]);return tr.endVector()}static startKernelDefHashesVector(tr,tn){tr.startVector(8,tn,8)}static endKernelCreateInfos(tr){return tr.endObject()}static createKernelCreateInfos(tr,ti,to){return tn.startKernelCreateInfos(tr),tn.addNodeIndices(tr,ti),tn.addKernelDefHashes(tr,to),tn.endKernelCreateInfos(tr)}}tr.KernelCreateInfos=tn}(tr.fbs||(tr.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsSubGraphSessionState(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsSubGraphSessionState(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}graphId(tr){let 
tn=this.bb.__offset(this.bb_pos,4);return tn?this.bb.__string(this.bb_pos+tn,tr):null}sessionState(tn){let ti=this.bb.__offset(this.bb_pos,6);return ti?(tn||new tr.experimental.fbs.SessionState).__init(this.bb.__indirect(this.bb_pos+ti),this.bb):null}static startSubGraphSessionState(tr){tr.startObject(2)}static addGraphId(tr,tn){tr.addFieldOffset(0,tn,0)}static addSessionState(tr,tn){tr.addFieldOffset(1,tn,0)}static endSubGraphSessionState(tr){let tn=tr.endObject();return tr.requiredField(tn,4),tn}static createSubGraphSessionState(tr,tn,to){return ti.startSubGraphSessionState(tr),ti.addGraphId(tr,tn),ti.addSessionState(tr,to),ti.endSubGraphSessionState(tr)}}tn.SubGraphSessionState=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsSessionState(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsSessionState(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}kernels(tn){let ti=this.bb.__offset(this.bb_pos,4);return ti?(tn||new tr.experimental.fbs.KernelCreateInfos).__init(this.bb.__indirect(this.bb_pos+ti),this.bb):null}subGraphSessionStates(tn,ti){let to=this.bb.__offset(this.bb_pos,6);return to?(ti||new tr.experimental.fbs.SubGraphSessionState).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+to)+4*tn),this.bb):null}subGraphSessionStatesLength(){let tr=this.bb.__offset(this.bb_pos,6);return tr?this.bb.__vector_len(this.bb_pos+tr):0}static startSessionState(tr){tr.startObject(2)}static addKernels(tr,tn){tr.addFieldOffset(0,tn,0)}static addSubGraphSessionStates(tr,tn){tr.addFieldOffset(1,tn,0)}static createSubGraphSessionStatesVector(tr,tn){tr.startVector(4,tn.length,4);for(let ti=tn.length-1;ti>=0;ti--)tr.addOffset(tn[ti]);return tr.endVector()}static startSubGraphSessionStatesVector(tr,tn){tr.startVector(4,tn,4)}static endSessionState(tr){return tr.endObject()}static createSessionState(tr,tn,to){return ti.startSessionState(tr),ti.addKernels(tr,tn),ti.addSubGraphSessionStates(tr,to),ti.endSessionState(tr)}}tn.SessionState=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={})),function(tr){!function(tn){!function(tn){class ti{constructor(){this.bb=null,this.bb_pos=0}__init(tr,tn){return this.bb_pos=tr,this.bb=tn,this}static getRootAsInferenceSession(tr,tn){return(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static getSizePrefixedRootAsInferenceSession(tr,tn){return tr.setPosition(tr.position()+ts.flatbuffers.SIZE_PREFIX_LENGTH),(tn||new ti).__init(tr.readInt32(tr.position())+tr.position(),tr)}static bufferHasIdentifier(tr){return tr.__has_identifier("ORTM")}ortVersion(tr){let tn=this.bb.__offset(this.bb_pos,4);return tn?this.bb.__string(this.bb_pos+tn,tr):null}model(tn){let ti=this.bb.__offset(this.bb_pos,6);return ti?(tn||new tr.experimental.fbs.Model).__init(this.bb.__indirect(this.bb_pos+ti),this.bb):null}sessionState(tn){let ti=this.bb.__offset(this.bb_pos,8);return ti?(tn||new tr.experimental.fbs.SessionState).__init(this.bb.__indirect(this.bb_pos+ti),this.bb):null}static startInferenceSession(tr){tr.startObject(3)}static addOrtVersion(tr,tn){tr.addFieldOffset(0,tn,0)}static addModel(tr,tn){tr.addFieldOffset(1,tn,0)}static 
addSessionState(tr,tn){tr.addFieldOffset(2,tn,0)}static endInferenceSession(tr){return tr.endObject()}static finishInferenceSessionBuffer(tr,tn){tr.finish(tn,"ORTM")}static finishSizePrefixedInferenceSessionBuffer(tr,tn){tr.finish(tn,"ORTM",!0)}static createInferenceSession(tr,tn,to,ta){return ti.startInferenceSession(tr),ti.addOrtVersion(tr,tn),ti.addModel(tr,to),ti.addSessionState(tr,ta),ti.endInferenceSession(tr)}}tn.InferenceSession=ti}(tn.fbs||(tn.fbs={}))}(tr.experimental||(tr.experimental={}))}(tn.onnxruntime||(tn.onnxruntime={}))},7448:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.OnnxjsSessionHandler=void 0;let to=ti(1670),ta=ti(9162);tn.OnnxjsSessionHandler=class{constructor(tr){this.session=tr,this.inputNames=this.session.inputNames,this.outputNames=this.session.outputNames}async dispose(){}async run(tr,tn,ti){let ts=new Map;for(let tn in tr)if(Object.hasOwnProperty.call(tr,tn)){let ti=tr[tn];ts.set(tn,new ta.Tensor(ti.dims,ti.type,void 0,void 0,ti.data))}let tu=await this.session.run(ts),tl={};return tu.forEach((tr,tn)=>{tl[tn]=new to.Tensor(tr.type,tr.data,tr.dims)}),tl}startProfiling(){this.session.startProfiling()}endProfiling(){this.session.endProfiling()}}},6919:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.Session=void 0;let to=ti(7067),ta=ti(1296),ts=ti(7091),tu=ti(1036),tl=ti(6231),tc=ti(2644);tn.Session=class{constructor(tr={}){this._initialized=!1,this.backendHint=tr.backendHint,this.profiler=tl.Profiler.create(tr.profiler),this.context={profiler:this.profiler,graphInputTypes:[],graphInputDims:[]}}get inputNames(){return this._model.graph.getInputNames()}get outputNames(){return this._model.graph.getOutputNames()}startProfiling(){this.profiler.start()}endProfiling(){this.profiler.stop()}async loadModel(tr,tn,ti){await this.profiler.event("session","Session.loadModel",async()=>{let tu=await (0,ts.resolveBackend)(this.backendHint);if(this.sessionHandler=tu.createSessionHandler(this.context),this._model=new tc.Model,"string"==typeof tr){let tn=tr.endsWith(".ort");if("undefined"==typeof fetch){let ti=await (0,ta.promisify)(to.readFile)(tr);this.initialize(ti,tn)}else{let ti=await fetch(tr),to=await ti.arrayBuffer();this.initialize(new Uint8Array(to),tn)}}else if(ArrayBuffer.isView(tr))this.initialize(tr);else{let to=new Uint8Array(tr,tn||0,ti||tr.byteLength);this.initialize(to)}})}initialize(tr,tn){if(this._initialized)throw Error("already initialized");this.profiler.event("session","Session.initialize",()=>{let ti=this.sessionHandler.transformGraph?this.sessionHandler:void 0;this._model.load(tr,ti,tn),this.sessionHandler.onGraphInitialized&&this.sessionHandler.onGraphInitialized(this._model.graph),this.initializeOps(this._model.graph),this._executionPlan=new tu.ExecutionPlan(this._model.graph,this._ops,this.profiler)}),this._initialized=!0}async run(tr){if(!this._initialized)throw Error("session not initialized yet");return this.profiler.event("session","Session.run",async()=>{let tn=this.normalizeAndValidateInputs(tr),ti=await this._executionPlan.execute(this.sessionHandler,tn);return this.createOutput(ti)})}normalizeAndValidateInputs(tr){let tn=this._model.graph.getInputNames();if(Array.isArray(tr)){if(tr.length!==tn.length)throw Error(`incorrect input array length: expected ${tn.length} but got ${tr.length}`)}else{if(tr.size!==tn.length)throw Error(`incorrect input map size: expected ${tn.length} but got ${tr.size}`);let ti=Array(tr.size),to=0;for(let ta=0;ta"string"==typeof tr)))throw TypeError("cache 
should be a string array");tp&&(this.cache=Array(tl))}else{if(void 0!==ts){let tr=th(tn);if(!(ts instanceof tr))throw TypeError(`cache should be type ${tr.name}`)}if(tp){let tr=new ArrayBuffer(tl*function(tr){switch(tr){case"bool":case"int8":case"uint8":return 1;case"int16":case"uint16":return 2;case"int32":case"uint32":case"float32":return 4;case"float64":return 8;default:throw Error(`cannot calculate sizeof() on type ${tr}`)}}(tn));this.cache=function(tr,tn){return new(th(tn))(tr)}(tr,tn)}}}static fromProto(tr){if(!tr)throw Error("cannot construct Value from an empty tensor");let tn=tc.ProtoUtil.tensorDataTypeFromProto(tr.dataType),ti=tc.ProtoUtil.tensorDimsFromProto(tr.dims),to=new tf(ti,tn);if("string"===tn)tr.stringData.forEach((tr,tn)=>{to.data[tn]=(0,tc.decodeUtf8String)(tr)});else if(tr.rawData&&"number"==typeof tr.rawData.byteLength&&tr.rawData.byteLength>0){let tn=to.data,ti=new DataView(tr.rawData.buffer,tr.rawData.byteOffset,tr.rawData.byteLength),ta=td(tr.dataType),ts=tr.rawData.byteLength/ta;if(tr.rawData.byteLength%ta!=0)throw Error("invalid buffer length");if(tn.length!==ts)throw Error("buffer length mismatch");for(let to=0;to0){let tn=to.data,ti=new DataView(tr.rawDataArray().buffer,tr.rawDataArray().byteOffset,tr.rawDataLength()),ta=td(tr.dataType()),ts=tr.rawDataLength()/ta;if(tr.rawDataLength()%ta!=0)throw Error("invalid buffer length");if(tn.length!==ts)throw Error("buffer length mismatch");for(let to=0;to1&&tc>1)return;tu[ts-tl]=Math.max(ti,tc)}return tu}static index(tr,tn){let ti=Array(tn.length);return tp.fillIndex(tr,tn,ti),ti}static fillIndex(tr,tn,ti){let to=tr.length-tn.length;for(let ta=0;ta=0;tr--)to[tr]=tf%ts[tr],tf=Math.floor(tf/ts[tr]);tg||(tp.fillIndex(to,tr.dims,ta),td=tr.get(ta)),tb||(tp.fillIndex(to,tn.dims,tl),th=tn.get(tl)),tc.set(to,ti(td,th))}}return tc}}static isValidBroadcast(tr,tn){let ti=tr.length,to=tn.length;if(ti>to)return!1;for(let ta=1;ta<=ti;ta++)if(1!==tr[ti-ta]&&tr[ti-ta]!==tn[to-ta])return!1;return!0}static getBroadcastDims(tr,tn){let ti=tr.length,to=[];for(let ta=0;ta1&&1===tu&&to.unshift(ts)}return to}}tn.BroadcastUtil=tp,tn.arrayCopyHelper=function(tr,tn,ti,to,ta){if(to<0||to>=tn.length)throw Error("sourceIndex out of bounds");if(ti<0||ti>=tr.length)throw Error("targetIndex out of bounds");if(to+ta>tn.length)throw Error("source indices to be copied are outside bounds");if(ti+ta>tr.length)throw Error("target array is too small to hold result");for(let ts=0;tsts.default.isLong(tr)?tr.toNumber():tr)}static tensorValueTypeFromProto(tr){return{tensorType:tf.tensorDataTypeFromProto(tr.elemType),shape:{dims:tf.tensorDimsFromProto(tr.shape.dim.map(tr=>tr.dimValue))}}}static tensorDimsFromORTFormat(tr){let tn=[];for(let ti=0;titr.length)throw Error(`invalid dimension of ${tn} for sizeFromDimension as Tensor has ${tr.length} dimensions.`);return th.getSizeFromDimensionRange(tr,tn,tr.length)}static sizeToDimension(tr,tn){if(tn<0||tn>tr.length)throw Error(`invalid dimension of ${tn} for sizeToDimension as Tensor has ${tr.length} dimensions.`);return th.getSizeFromDimensionRange(tr,0,tn)}static getSizeFromDimensionRange(tr,tn,ti){let to=1;for(let ta=tn;ta=0;--to)ti[to]=ti[to+1]*tr[to+1];return ti}static transpose(tr){return tr.slice().reverse()}static indicesToOffset(tr,tn,ti){void 0===ti&&(ti=tr.length);let to=0;for(let ta=0;ta=tn)throw Error("unsupported axis for this operation.");return tr<0?tr+tn:tr}static normalizeAxes(tr,tn){return tr.map(tr=>this.normalizeAxis(tr,tn))}static incrementIndex(tr,tn,ti){if(0===tn.length||0===tr.length)throw 
Error("Index incrementing unsupported for scalar Tensor");if(void 0===ti)ti=tn.length;else if(ti<=0||ti>tn.length)throw Error("Incorrect axis to increment on");for(let to=ti-1;to>=0&&(tr[to]++,!(tr[to]=tr.length)throw Error("the dimension with value zero exceeds the dimension size of the input tensor");to[tu]=tr[tu]}else to[tu]=tn[tu];ts*=to[tu]}}let tu=th.size(tr);if(-1!==ta){if(tu%ts!=0)throw Error(`the input tensor cannot be reshaped to the requested shape. Input shape: [${tr}] Output shape: [${tn}]`);to[ta]=tu/ts}else if(ts!==tu)throw Error("reshapedDims and originalDims don't have matching sizes");return to}static sortBasedOnPerm(tr,tn){return tn?tn.map(tn=>tr[tn]):tr.slice().reverse()}static padShape(tr,tn){let ti=tr.length;return tr.map((tr,to)=>tr+tn[to]+tn[to+ti])}static areEqual(tr,tn){return tr.length===tn.length&&tr.every((tr,ti)=>tr===tn[ti])}static validateDimsAndCalcSize(tr){if(tr.length>6)throw TypeError("Only rank 0 to 6 is supported for tensor shape.");let tn=1;for(let ti of tr){if(!Number.isInteger(ti))throw TypeError(`Invalid shape: ${ti} is not an integer`);if(ti<0||ti>2147483647)throw TypeError(`Invalid shape: length ${ti} is not allowed`);tn*=ti}return tn}static flattenShape(tr,tn){tn<0&&(tn+=tr.length);let ti=tr.reduce((tr,tn)=>tr*tn,1),to=tr.slice(tn).reduce((tr,tn)=>tr*tn,1);return[ti/to,to]}static squeezeShape(tr,tn){let ti=[];tn=th.normalizeAxes(tn,tr.length);for(let to=0;to=0;if(ta&&1!==tr[to])throw Error("squeeze an axis of size different than 1");(0===tn.length&&tr[to]>1||tn.length>0&&!ta)&&ti.push(tr[to])}return ti}static unsqueezeShape(tr,tn){let ti=Array(tr.length+tn.length);ti.fill(0);for(let tr=0;tr=ti.length)throw Error("'axes' has an out of range axis");if(0!==ti[to])throw Error("'axes' has a duplicate axis");ti[to]=1}let to=0;for(let tn=0;tn=tn.length)throw Error("sourceIndex out of bounds");if(ti<0||ti>=tr.length)throw Error("targetIndex out of bounds");if(to+ta>tn.length)throw Error("source indices to be copied are outside bounds");if(ti+ta>tr.length)throw Error("target array is too small to hold result");for(let ts=0;ts=tn.length)throw Error("sourceIndex out of bounds");if(ti<0||ti>=tr.length)throw Error("targetIndex out of bounds");if(to+ta>tn.length)throw Error("source indices to be copied are outside bounds");if(ti+ta>tr.length)throw Error("target array is too small to hold result");for(let tu=0;tu=tn.length)throw Error("sourceIndex out of bounds");if(ti<0||ti>=tr.length)throw Error("targetIndex out of bounds");if(to+ta>tn.length)throw Error("source indices to be copied are outside bounds");if(ti+ta>tr.length)throw Error("target array is too small to hold result");for(let tu=0;tu=tn.length)throw Error("sourceIndex out of bounds");if(ti<0||ti>=tr.length)throw Error("targetIndex out of bounds");if(to+ta>tn.length)throw Error("source indices to be copied are outside bounds");if(ti+ta>tr.length)throw Error("target array is too small to hold result");for(let ts=0;tstn.push(ti));let tu=tb.calcReduceShape(ts,tn,!0),tc=th.size(tu),tf=new tl.Tensor(tu,tr.type),td=th.computeStrides(tu),tg=th.computeStrides(ts),tm=Array(ts.length);for(let ti=0;ti=tn.length)return ts(tr[ta]);let tc=tn[to],tp=tc>=ti.length?1:th.size(ti.slice(tc+1));for(let tf=0;tf0!==tr)}}tn.ReduceUtil=tb;class tm{static adjustPoolAttributes(tr,tn,ti,to,ta,ts){if(!tr&&ti.length!==tn.length-2)throw Error("length of specified kernel shapes should be 2 less than length of input dimensions");if(tr)for(let tr=0;tr=ti.length?ti.push(tn[tr+2]):ti[tr]=tn[tr+2];for(let 
tr=0;tr=ti[tr]||ts[tr+ti.length]>=ti[tr])throw Error("pads should be smaller than kernel")}}static adjustPadsBasedOnAutoPad(tr,tn,ti,to,ta,ts){if(ts){if(ta.length!==2*(tr.length-2))throw Error("length of pads should be twice the length of data dimensions");if(tn.length!==tr.length-2)throw Error("length of strides should be the length of data dimensions");if(to.length!==tr.length-2)throw Error("length of kernel shapes should be the length of data dimensions");for(let tu=0;tu{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.iterateExtraOptions=void 0,tn.iterateExtraOptions=(tr,ti,to,ta)=>{if("object"==typeof tr&&null!==tr){if(to.has(tr))throw Error("Circular reference in options");to.add(tr)}Object.entries(tr).forEach(([tr,ts])=>{let tu=ti?ti+tr:tr;if("object"==typeof ts)(0,tn.iterateExtraOptions)(ts,tu+".",to,ta);else if("string"==typeof ts||"number"==typeof ts)ta(tu,ts.toString());else{if("boolean"!=typeof ts)throw Error("Can't handle extra config type: "+typeof ts);ta(tu,ts?"1":"0")}})}},2157:function(tr,tn,ti){"use strict";var to,ta=this&&this.__createBinding||(Object.create?function(tr,tn,ti,to){void 0===to&&(to=ti);var ta=Object.getOwnPropertyDescriptor(tn,ti);ta&&!("get"in ta?!tn.__esModule:ta.writable||ta.configurable)||(ta={enumerable:!0,get:function(){return tn[ti]}}),Object.defineProperty(tr,to,ta)}:function(tr,tn,ti,to){void 0===to&&(to=ti),tr[to]=tn[ti]}),ts=this&&this.__setModuleDefault||(Object.create?function(tr,tn){Object.defineProperty(tr,"default",{enumerable:!0,value:tn})}:function(tr,tn){tr.default=tn}),tu=this&&this.__importStar||function(tr){if(tr&&tr.__esModule)return tr;var tn={};if(null!=tr)for(var ti in tr)"default"!==ti&&Object.prototype.hasOwnProperty.call(tr,ti)&&ta(tn,tr,ti);return ts(tn,tr),tn};Object.defineProperty(tn,"__esModule",{value:!0}),tn.endProfiling=tn.run=tn.releaseSession=tn.createSession=tn.createSessionFinalize=tn.createSessionAllocate=tn.initOrt=tn.initWasm=void 0;let tl=ti(1670),tc=tu(ti(349)),tp=ti(6361),tf=()=>!!tl.env.wasm.proxy&&"undefined"!=typeof document,td,th,tg,tb=!1,tm=!1,ty=!1,t_=[],tv=[],tx=[],tw=[],tT=[],tS=[],tO=()=>{if(tb||!tm||ty||!td)throw Error("worker not ready")},tA=tr=>{switch(tr.data.type){case"init-wasm":tb=!1,tr.data.err?(ty=!0,th[1](tr.data.err)):(tm=!0,th[0]());break;case"init-ort":tr.data.err?tg[1](tr.data.err):tg[0]();break;case"create_allocate":tr.data.err?t_.shift()[1](tr.data.err):t_.shift()[0](tr.data.out);break;case"create_finalize":tr.data.err?tv.shift()[1](tr.data.err):tv.shift()[0](tr.data.out);break;case"create":tr.data.err?tx.shift()[1](tr.data.err):tx.shift()[0](tr.data.out);break;case"release":tr.data.err?tw.shift()[1](tr.data.err):tw.shift()[0]();break;case"run":tr.data.err?tT.shift()[1](tr.data.err):tT.shift()[0](tr.data.out);break;case"end-profiling":tr.data.err?tS.shift()[1](tr.data.err):tS.shift()[0]()}},tE="undefined"!=typeof document?null===(to=null==document?void 0:document.currentScript)||void 0===to?void 0:to.src:void 0;tn.initWasm=async()=>{if(tf()){if(tm)return;if(tb)throw Error("multiple calls to 'initWasm()' detected.");if(ty)throw Error("previous call to 'initWasm()' failed.");return tb=!0,void 0===tl.env.wasm.wasmPaths&&tE&&0!==tE.indexOf("blob:")&&(tl.env.wasm.wasmPaths=tE.substr(0,+tE.lastIndexOf("/")+1)),new Promise((tr,tn)=>{null==td||td.terminate(),(td=ti(9710).Z()).onmessage=tA,th=[tr,tn];let to={type:"init-wasm",in:tl.env.wasm};td.postMessage(to)})}return(0,tp.initializeWebAssembly)(tl.env.wasm)},tn.initOrt=async(tr,tn)=>{if(tf())return tO(),new 
Promise((ti,to)=>{tg=[ti,to];let ta={type:"init-ort",in:{numThreads:tr,loggingLevel:tn}};td.postMessage(ta)});tc.initOrt(tr,tn)},tn.createSessionAllocate=async tr=>tf()?(tO(),new Promise((tn,ti)=>{t_.push([tn,ti]);let to={type:"create_allocate",in:{model:tr}};td.postMessage(to,[tr.buffer])})):tc.createSessionAllocate(tr),tn.createSessionFinalize=async(tr,tn)=>tf()?(tO(),new Promise((ti,to)=>{tv.push([ti,to]);let ta={type:"create_finalize",in:{modeldata:tr,options:tn}};td.postMessage(ta)})):tc.createSessionFinalize(tr,tn),tn.createSession=async(tr,tn)=>tf()?(tO(),new Promise((ti,to)=>{tx.push([ti,to]);let ta={type:"create",in:{model:tr,options:tn}};td.postMessage(ta,[tr.buffer])})):tc.createSession(tr,tn),tn.releaseSession=async tr=>{if(tf())return tO(),new Promise((tn,ti)=>{tw.push([tn,ti]);let to={type:"release",in:tr};td.postMessage(to)});tc.releaseSession(tr)},tn.run=async(tr,tn,ti,to,ta)=>tf()?(tO(),new Promise((ts,tu)=>{tT.push([ts,tu]);let tl={type:"run",in:{sessionId:tr,inputIndices:tn,inputs:ti,outputIndices:to,options:ta}};td.postMessage(tl,tc.extractTransferableBuffers(ti))})):tc.run(tr,tn,ti,to,ta),tn.endProfiling=async tr=>{if(tf())return tO(),new Promise((tn,ti)=>{tS.push([tn,ti]);let to={type:"end-profiling",in:tr};td.postMessage(to)});tc.endProfiling(tr)}},586:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.setRunOptions=void 0;let to=ti(7967),ta=ti(4983),ts=ti(6361);tn.setRunOptions=tr=>{let tn=(0,ts.getInstance)(),ti=0,tu=[],tl=tr||{};try{if(void 0===(null==tr?void 0:tr.logSeverityLevel))tl.logSeverityLevel=2;else if("number"!=typeof tr.logSeverityLevel||!Number.isInteger(tr.logSeverityLevel)||tr.logSeverityLevel<0||tr.logSeverityLevel>4)throw Error(`log serverity level is not valid: ${tr.logSeverityLevel}`);if(void 0===(null==tr?void 0:tr.logVerbosityLevel))tl.logVerbosityLevel=0;else if("number"!=typeof tr.logVerbosityLevel||!Number.isInteger(tr.logVerbosityLevel))throw Error(`log verbosity level is not valid: ${tr.logVerbosityLevel}`);void 0===(null==tr?void 0:tr.terminate)&&(tl.terminate=!1);let ts=0;if(void 0!==(null==tr?void 0:tr.tag)&&(ts=(0,ta.allocWasmString)(tr.tag,tu)),ti=tn._OrtCreateRunOptions(tl.logSeverityLevel,tl.logVerbosityLevel,!!tl.terminate,ts),0===ti)throw Error("Can't create run options");return void 0!==(null==tr?void 0:tr.extra)&&(0,to.iterateExtraOptions)(tr.extra,"",new WeakSet,(tr,to)=>{let ts=(0,ta.allocWasmString)(tr,tu),tl=(0,ta.allocWasmString)(to,tu);if(0!==tn._OrtAddRunConfigEntry(ti,ts,tl))throw Error(`Can't set a run config entry: ${tr} - ${to}`)}),[ti,tu]}catch(tr){throw 0!==ti&&tn._OrtReleaseRunOptions(ti),tu.forEach(tn._free),tr}}},2306:(tr,tn,ti)=>{"use strict";let to;Object.defineProperty(tn,"__esModule",{value:!0}),tn.OnnxruntimeWebAssemblySessionHandler=void 0;let ta=ti(2806),ts=ti(1670),tu=ti(2850),tl=ti(2157);tn.OnnxruntimeWebAssemblySessionHandler=class{async createSessionAllocate(tr){let tn=await fetch(tr),ti=await tn.arrayBuffer();return(0,tl.createSessionAllocate)(new Uint8Array(ti))}async loadModel(tr,tn){if(to||(await (0,tl.initOrt)(ts.env.wasm.numThreads,(tr=>{switch(tr){case"verbose":return 0;case"info":return 1;case"warning":return 2;case"error":return 3;case"fatal":return 4;default:throw Error(`unsupported logging level: ${tr}`)}})(ts.env.logLevel)),to=!0),"string"==typeof tr){if("undefined"==typeof fetch){let ti=await (0,tu.promisify)(ta.readFile)(tr);[this.sessionId,this.inputNames,this.outputNames]=await (0,tl.createSession)(ti,tn)}else{let ti=await 
this.createSessionAllocate(tr);[this.sessionId,this.inputNames,this.outputNames]=await (0,tl.createSessionFinalize)(ti,tn)}}else[this.sessionId,this.inputNames,this.outputNames]=await (0,tl.createSession)(tr,tn)}async dispose(){return(0,tl.releaseSession)(this.sessionId)}async run(tr,tn,ti){let to=[],ta=[];Object.entries(tr).forEach(tr=>{let tn=tr[0],ti=tr[1],ts=this.inputNames.indexOf(tn);if(-1===ts)throw Error(`invalid input '${tn}'`);to.push(ti),ta.push(ts)});let tu=[];Object.entries(tn).forEach(tr=>{let tn=tr[0],ti=this.outputNames.indexOf(tn);if(-1===ti)throw Error(`invalid output '${tn}'`);tu.push(ti)});let tc=await (0,tl.run)(this.sessionId,ta,to.map(tr=>[tr.type,tr.dims,tr.data]),tu,ti),tp={};for(let tr=0;tr{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.setSessionOptions=void 0;let to=ti(7967),ta=ti(4983),ts=ti(6361);tn.setSessionOptions=tr=>{let tn=(0,ts.getInstance)(),ti=0,tu=[],tl=tr||{};(tr=>{tr.extra||(tr.extra={}),tr.extra.session||(tr.extra.session={});let tn=tr.extra.session;tn.use_ort_model_bytes_directly||(tn.use_ort_model_bytes_directly="1")})(tl);try{void 0===(null==tr?void 0:tr.graphOptimizationLevel)&&(tl.graphOptimizationLevel="all");let tc=(tr=>{switch(tr){case"disabled":return 0;case"basic":return 1;case"extended":return 2;case"all":return 99;default:throw Error(`unsupported graph optimization level: ${tr}`)}})(tl.graphOptimizationLevel);void 0===(null==tr?void 0:tr.enableCpuMemArena)&&(tl.enableCpuMemArena=!0),void 0===(null==tr?void 0:tr.enableMemPattern)&&(tl.enableMemPattern=!0),void 0===(null==tr?void 0:tr.executionMode)&&(tl.executionMode="sequential");let tp=(tr=>{switch(tr){case"sequential":return 0;case"parallel":return 1;default:throw Error(`unsupported execution mode: ${tr}`)}})(tl.executionMode),tf=0;if(void 0!==(null==tr?void 0:tr.logId)&&(tf=(0,ta.allocWasmString)(tr.logId,tu)),void 0===(null==tr?void 0:tr.logSeverityLevel))tl.logSeverityLevel=2;else if("number"!=typeof tr.logSeverityLevel||!Number.isInteger(tr.logSeverityLevel)||tr.logSeverityLevel<0||tr.logSeverityLevel>4)throw Error(`log serverity level is not valid: ${tr.logSeverityLevel}`);if(void 0===(null==tr?void 0:tr.logVerbosityLevel))tl.logVerbosityLevel=0;else if("number"!=typeof tr.logVerbosityLevel||!Number.isInteger(tr.logVerbosityLevel))throw Error(`log verbosity level is not valid: ${tr.logVerbosityLevel}`);if(void 0===(null==tr?void 0:tr.enableProfiling)&&(tl.enableProfiling=!1),ti=tn._OrtCreateSessionOptions(tc,!!tl.enableCpuMemArena,!!tl.enableMemPattern,tp,!!tl.enableProfiling,0,tf,tl.logSeverityLevel,tl.logVerbosityLevel),0===ti)throw Error("Can't create session options");return(null==tr?void 0:tr.executionProviders)&&((tr,tn,ti)=>{for(let to of tn){let tn="string"==typeof to?to:to.name;switch(tn){case"xnnpack":tn="XNNPACK";break;case"wasm":case"cpu":continue;default:throw Error(`not supported EP: ${tn}`)}let tu=(0,ta.allocWasmString)(tn,ti);if(0!==(0,ts.getInstance)()._OrtAppendExecutionProvider(tr,tu))throw Error(`Can't append execution provider: ${tn}`)}})(ti,tr.executionProviders,tu),void 0!==(null==tr?void 0:tr.extra)&&(0,to.iterateExtraOptions)(tr.extra,"",new WeakSet,(tr,to)=>{let ts=(0,ta.allocWasmString)(tr,tu),tl=(0,ta.allocWasmString)(to,tu);if(0!==tn._OrtAddSessionConfigEntry(ti,ts,tl))throw Error(`Can't set a session config entry: ${tr} - ${to}`)}),[ti,tu]}catch(tr){throw 0!==ti&&tn._OrtReleaseSessionOptions(ti),tu.forEach(tn._free),tr}}},4983:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.allocWasmString=void 0;let 
to=ti(6361);tn.allocWasmString=(tr,tn)=>{let ti=(0,to.getInstance)(),ta=ti.lengthBytesUTF8(tr)+1,ts=ti._malloc(ta);return ti.stringToUTF8(tr,ts,ta),tn.push(ts),ts}},349:(tr,tn,ti)=>{"use strict";Object.defineProperty(tn,"__esModule",{value:!0}),tn.extractTransferableBuffers=tn.endProfiling=tn.run=tn.releaseSession=tn.createSession=tn.createSessionFinalize=tn.createSessionAllocate=tn.initOrt=void 0;let to=ti(586),ta=ti(4919),ts=ti(4983),tu=ti(6361);tn.initOrt=(tr,tn)=>{let ti=(0,tu.getInstance)()._OrtInit(tr,tn);if(0!==ti)throw Error(`Can't initialize onnxruntime. error code = ${ti}`)};let tl=new Map;tn.createSessionAllocate=tr=>{let tn=(0,tu.getInstance)(),ti=tn._malloc(tr.byteLength);return tn.HEAPU8.set(tr,ti),[ti,tr.byteLength]},tn.createSessionFinalize=(tr,tn)=>{let ti=(0,tu.getInstance)(),to=0,ts=0,tc=[];try{if([ts,tc]=(0,ta.setSessionOptions)(tn),to=ti._OrtCreateSession(tr[0],tr[1],ts),0===to)throw Error("Can't create a session")}finally{ti._free(tr[0]),ti._OrtReleaseSessionOptions(ts),tc.forEach(ti._free)}let tp=ti._OrtGetInputCount(to),tf=ti._OrtGetOutputCount(to),td=[],th=[],tg=[],tb=[];for(let tr=0;tr{let to=(0,tn.createSessionAllocate)(tr);return(0,tn.createSessionFinalize)(to,ti)},tn.releaseSession=tr=>{let tn=(0,tu.getInstance)(),ti=tl.get(tr);if(!ti)throw Error("invalid session id");let to=ti[0],ta=ti[1],ts=ti[2];ta.forEach(tn._OrtFree),ts.forEach(tn._OrtFree),tn._OrtReleaseSession(to),tl.delete(tr)};let tc=tr=>{switch(tr){case"int8":return 3;case"uint8":return 2;case"bool":return 9;case"int16":return 5;case"uint16":return 4;case"int32":return 6;case"uint32":return 12;case"float32":return 1;case"float64":return 11;case"string":return 8;case"int64":return 7;case"uint64":return 13;default:throw Error(`unsupported data type: ${tr}`)}},tp=tr=>{switch(tr){case 3:return"int8";case 2:return"uint8";case 9:return"bool";case 5:return"int16";case 4:return"uint16";case 6:return"int32";case 12:return"uint32";case 1:return"float32";case 11:return"float64";case 8:return"string";case 7:return"int64";case 13:return"uint64";default:throw Error(`unsupported data type: ${tr}`)}},tf=tr=>{switch(tr){case"float32":return Float32Array;case"uint8":case"bool":return Uint8Array;case"int8":return Int8Array;case"uint16":return Uint16Array;case"int16":return Int16Array;case"int32":return Int32Array;case"float64":return Float64Array;case"uint32":return Uint32Array;case"int64":return BigInt64Array;case"uint64":return BigUint64Array;default:throw Error(`unsupported type: ${tr}`)}};tn.run=(tr,tn,ti,ta,td)=>{let th=(0,tu.getInstance)(),tg=tl.get(tr);if(!tg)throw Error("invalid session id");let tb=tg[0],tm=tg[1],ty=tg[2],t_=tn.length,tv=ta.length,tx=0,tw=[],tT=[],tS=[];try{[tx,tw]=(0,to.setRunOptions)(td);for(let tr=0;trth.HEAP32[tr++]=tn);let ti=th._OrtCreateTensor(tc(ta),tn,to,tf,tu.length);if(0===ti)throw Error("Can't create a tensor");tT.push(ti)}finally{th.stackRestore(tp)}}let tr=th.stackSave(),tu=th.stackAlloc(4*t_),tl=th.stackAlloc(4*t_),tg=th.stackAlloc(4*tv),tO=th.stackAlloc(4*tv);try{let tr=tu/4,ti=tl/4,to=tg/4,ts=tO/4;for(let to=0;totr*tn);if(ta=tp(ti),"string"===ta){let tr=[],tn=ts/4;for(let ti=0;ti{let tn=(0,tu.getInstance)(),ti=tl.get(tr);if(!ti)throw Error("invalid session id");let to=ti[0],ta=tn._OrtEndProfiling(to);if(0===ta)throw Error("Can't get an profile file name");tn._OrtFree(ta)},tn.extractTransferableBuffers=tr=>{let tn=[];for(let ti of tr){let tr=ti[2];!Array.isArray(tr)&&tr.buffer&&tn.push(tr.buffer)}return tn}},6361:function(tr,tn,ti){"use strict";var 
to=this&&this.__createBinding||(Object.create?function(tr,tn,ti,to){void 0===to&&(to=ti);var ta=Object.getOwnPropertyDescriptor(tn,ti);ta&&!("get"in ta?!tn.__esModule:ta.writable||ta.configurable)||(ta={enumerable:!0,get:function(){return tn[ti]}}),Object.defineProperty(tr,to,ta)}:function(tr,tn,ti,to){void 0===to&&(to=ti),tr[to]=tn[ti]}),ta=this&&this.__setModuleDefault||(Object.create?function(tr,tn){Object.defineProperty(tr,"default",{enumerable:!0,value:tn})}:function(tr,tn){tr.default=tn}),ts=this&&this.__importStar||function(tr){if(tr&&tr.__esModule)return tr;var tn={};if(null!=tr)for(var ti in tr)"default"!==ti&&Object.prototype.hasOwnProperty.call(tr,ti)&&to(tn,tr,ti);return ta(tn,tr),tn},tu=this&&this.__importDefault||function(tr){return tr&&tr.__esModule?tr:{default:tr}};Object.defineProperty(tn,"__esModule",{value:!0}),tn.dispose=tn.getInstance=tn.initializeWebAssembly=void 0;let tl=ts(ti(6449)),tc=tu(ti(932)),tp=ti(3474),tf,td=!1,th=!1,tg=!1,tb=(tr,tn)=>tn?tr?"ort-wasm-simd-threaded.wasm":"ort-wasm-threaded.wasm":tr?"ort-wasm-simd.wasm":"ort-wasm.wasm";tn.initializeWebAssembly=async tr=>{if(td)return Promise.resolve();if(th)throw Error("multiple calls to 'initializeWebAssembly()' detected.");if(tg)throw Error("previous call to 'initializeWebAssembly()' failed.");th=!0;let tn=tr.initTimeout,to=tr.numThreads,ta=tr.simd,ts=to>1&&(()=>{try{return"undefined"!=typeof SharedArrayBuffer&&("undefined"!=typeof MessageChannel&&(new MessageChannel).port1.postMessage(new SharedArrayBuffer(1)),WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,5,4,1,3,1,1,10,11,1,9,0,65,0,254,16,2,0,26,11])))}catch(tr){return!1}})(),tu=ta&&(()=>{try{return WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,10,30,1,28,0,65,0,253,15,253,12,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,253,186,1,26,11]))}catch(tr){return!1}})(),tm="string"==typeof tr.wasmPaths?tr.wasmPaths:void 0,ty=tb(!1,ts),t_=tb(tu,ts),tv="object"==typeof tr.wasmPaths?tr.wasmPaths[t_]:void 0,tx=!1,tw=[];if(tn>0&&tw.push(new Promise(tr=>{setTimeout(()=>{tx=!0,tr()},tn)})),tw.push(new Promise((tr,tn)=>{let to=ts?tp:tc.default,ta={locateFile:(tr,tn)=>ts&&tr.endsWith(".worker.js")&&"undefined"!=typeof Blob?URL.createObjectURL(new Blob([ti(4154)],{type:"text/javascript"})):tr===ty?null!=tv?tv:(null!=tm?tm:tn)+t_:tn+tr};if(ts){if("undefined"==typeof Blob)ta.mainScriptUrlOrBlob=tl.join("/","ort-wasm-threaded.js");else{let tr=`var ortWasmThreaded=(function(){var _scriptDir;return ${to.toString()}})();`;ta.mainScriptUrlOrBlob=new Blob([tr],{type:"text/javascript"})}}to(ta).then(tn=>{th=!1,td=!0,tf=tn,tr()},tr=>{th=!1,tg=!0,tn(tr)})})),await Promise.race(tw),tx)throw Error(`WebAssembly backend initializing failed due to timeout: ${tn}ms`)},tn.getInstance=()=>{if(td&&tf)return tf;throw Error("WebAssembly is not initialized yet.")},tn.dispose=()=>{var tr;!td||th||tg||(th=!0,null===(tr=tf.PThread)||void 0===tr||tr.terminateAllThreads(),tf=void 0,th=!1,td=!1,tg=!0)}},9710:(tr,tn,ti)=>{"use strict";ti.d(tn,{Z:()=>ts});var to=ti(477),ta=ti.n(to);function ts(){return ta()('/*!\n* ONNX Runtime Web v1.14.0\n* Copyright (c) Microsoft Corporation. 
All rights reserved.\n* Licensed under the MIT License.\n*/\n(()=>{var t={474:(t,e,n)=>{var _scriptDir,r=(_scriptDir=(_scriptDir="undefined"!=typeof document&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(t){function e(){return j.buffer!=D&&N(j.buffer),P}function r(){return j.buffer!=D&&N(j.buffer),U}function a(){return j.buffer!=D&&N(j.buffer),F}function i(){return j.buffer!=D&&N(j.buffer),I}function o(){return j.buffer!=D&&N(j.buffer),W}var u,c,s;t=t||{},u||(u=void 0!==t?t:{}),u.ready=new Promise((function(t,e){c=t,s=e}));var l,f,p,h,d,y,b=Object.assign({},u),m="./this.program",g=(t,e)=>{throw e},v="object"==typeof window,w="function"==typeof importScripts,_="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node,O=u.ENVIRONMENT_IS_PTHREAD||!1,A="";function S(t){return u.locateFile?u.locateFile(t,A):A+t}if(_){let e;A=w?n(908).dirname(A)+"/":"//",y=()=>{d||(h=n(384),d=n(908))},l=function(t,e){return y(),t=d.normalize(t),h.readFileSync(t,e?void 0:"utf8")},p=t=>((t=l(t,!0)).buffer||(t=new Uint8Array(t)),t),f=(t,e,n)=>{y(),t=d.normalize(t),h.readFile(t,(function(t,r){t?n(t):e(r.buffer)}))},1{if(Q())throw process.exitCode=t,e;e instanceof ct||x("exiting due to exception: "+e),process.exit(t)},u.inspect=function(){return"[Emscripten Module object]"};try{e=n(925)}catch(t){throw console.error(\'The "worker_threads" module is not supported in this node.js build - perhaps a newer version is needed?\'),t}n.g.Worker=e.Worker}else(v||w)&&(w?A=self.location.href:"undefined"!=typeof document&&document.currentScript&&(A=document.currentScript.src),_scriptDir&&(A=_scriptDir),A=0!==A.indexOf("blob:")?A.substr(0,A.replace(/[?#].*/,"").lastIndexOf("/")+1):"",_||(l=t=>{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.send(null),e.responseText},w&&(p=t=>{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.responseType="arraybuffer",e.send(null),new Uint8Array(e.response)}),f=(t,e,n)=>{var r=new XMLHttpRequest;r.open("GET",t,!0),r.responseType="arraybuffer",r.onload=()=>{200==r.status||0==r.status&&r.response?e(r.response):n()},r.onerror=n,r.send(null)}));_&&"undefined"==typeof performance&&(n.g.performance=n(953).performance);var T=console.log.bind(console),E=console.warn.bind(console);_&&(y(),T=t=>h.writeSync(1,t+"\\n"),E=t=>h.writeSync(2,t+"\\n"));var M,C=u.print||T,x=u.printErr||E;Object.assign(u,b),b=null,u.thisProgram&&(m=u.thisProgram),u.quit&&(g=u.quit),u.wasmBinary&&(M=u.wasmBinary);var R=u.noExitRuntime||!1;"object"!=typeof WebAssembly&&at("no native wasm support detected");var j,k,D,P,U,F,I,W,H=!1,L="undefined"!=typeof TextDecoder?new TextDecoder("utf8"):void 0;function z(t,e,n){var r=(e>>>=0)+n;for(n=e;t[n]&&!(n>=r);)++n;if(16(a=224==(240&a)?(15&a)<<12|i<<6|o:(7&a)<<18|i<<12|o<<6|63&t[e++])?r+=String.fromCharCode(a):(a-=65536,r+=String.fromCharCode(55296|a>>10,56320|1023&a))}}else r+=String.fromCharCode(a)}return r}function Y(t,e){return(t>>>=0)?z(r(),t,e):""}function B(t,e,n,r){if(!(0>>=0;r=n+r-1;for(var i=0;i=o&&(o=65536+((1023&o)<<10)|1023&t.charCodeAt(++i)),127>=o){if(n>=r)break;e[n++>>>0]=o}else{if(2047>=o){if(n+1>=r)break;e[n++>>>0]=192|o>>6}else{if(65535>=o){if(n+2>=r)break;e[n++>>>0]=224|o>>12}else{if(n+3>=r)break;e[n++>>>0]=240|o>>18,e[n++>>>0]=128|o>>12&63}e[n++>>>0]=128|o>>6&63}e[n++>>>0]=128|63&o}}return e[n>>>0]=0,n-a}function G(t){for(var e=0,n=0;n=r?e++:2047>=r?e+=2:55296<=r&&57343>=r?(e+=4,++n):e+=3}return e}function N(t){D=t,u.HEAP8=P=new Int8Array(t),u.HEAP16=new Int16Array(t),u.HEAP32=F=new 
Int32Array(t),u.HEAPU8=U=new Uint8Array(t),u.HEAPU16=new Uint16Array(t),u.HEAPU32=I=new Uint32Array(t),u.HEAPF32=new Float32Array(t),u.HEAPF64=W=new Float64Array(t)}O&&(D=u.buffer);var V=u.INITIAL_MEMORY||16777216;if(O)j=u.wasmMemory,D=u.buffer;else if(u.wasmMemory)j=u.wasmMemory;else if(!((j=new WebAssembly.Memory({initial:V/65536,maximum:65536,shared:!0})).buffer instanceof SharedArrayBuffer))throw x("requested a shared WebAssembly.Memory but the returned buffer is not a SharedArrayBuffer, indicating that while the browser has SharedArrayBuffer it does not have WebAssembly threads support - you may need to set a flag"),_&&console.log("(on node you may need: --experimental-wasm-threads --experimental-wasm-bulk-memory and also use a recent version)"),Error("bad memory");j&&(D=j.buffer),V=D.byteLength,N(D);var $,q=[],X=[],J=[],Z=[];function Q(){return R||!1}function K(){var t=u.preRun.shift();q.unshift(t)}var tt,et=0,nt=null,rt=null;function at(t){throw O?postMessage({cmd:"onAbort",arg:t}):u.onAbort&&u.onAbort(t),x(t="Aborted("+t+")"),H=!0,t=new WebAssembly.RuntimeError(t+". Build with -sASSERTIONS for more info."),s(t),t}function it(){return tt.startsWith("data:application/octet-stream;base64,")}function ot(){var t=tt;try{if(t==tt&&M)return new Uint8Array(M);if(p)return p(t);throw"both async and sync fetching of the wasm failed"}catch(t){at(t)}}tt="ort-wasm-threaded.wasm",it()||(tt=S(tt));var ut={};function ct(t){this.name="ExitStatus",this.message="Program terminated with exit("+t+")",this.status=t}function st(t){(t=ht.Vb[t])||at(),ht.mc(t)}function lt(t){var e=ht.Cc();if(!e)return 6;ht.ac.push(e),ht.Vb[t.Ub]=e,e.Ub=t.Ub;var n={cmd:"run",start_routine:t.Ic,arg:t.zc,pthread_ptr:t.Ub};return e.$b=()=>{n.time=performance.now(),e.postMessage(n,t.Nc)},e.loaded&&(e.$b(),delete e.$b),0}function ft(t){if(O)return $t(1,1,t);Q()||(ht.oc(),u.onExit&&u.onExit(t),H=!0),g(t,new ct(t))}function pt(t,e){if(!e&&O)throw bt(t),"unwind";Q()||O||(me(),dt(J),be(0),re[1].length&&ae(1,10),re[2].length&&ae(2,10),ht.oc()),ft(t)}var ht={Yb:[],ac:[],qc:[],Vb:{},fc:function(){O&&ht.Ec()},Pc:function(){},Ec:function(){ht.receiveObjectTransfer=ht.Gc,ht.threadInitTLS=ht.pc,ht.setExitStatus=ht.nc,R=!1},nc:function(){},oc:function(){for(var t of Object.values(ht.Vb))ht.mc(t);for(t of ht.Yb)t.terminate();ht.Yb=[]},mc:function(t){var e=t.Ub;delete ht.Vb[e],ht.Yb.push(t),ht.ac.splice(ht.ac.indexOf(t),1),t.Ub=0,Oe(e)},Gc:function(){},pc:function(){ht.qc.forEach((t=>t()))},Fc:function(t,e){t.onmessage=n=>{var r=(n=n.data).cmd;if(t.Ub&&(ht.Bc=t.Ub),n.targetThread&&n.targetThread!=he()){var a=ht.Vb[n.Qc];a?a.postMessage(n,n.transferList):x(\'Internal error! Worker sent a message "\'+r+\'" to target pthread \'+n.targetThread+", but that thread no longer exists!")}else"processProxyingQueue"===r?zt(n.queue):"spawnThread"===r?lt(n):"cleanupThread"===r?st(n.thread):"killThread"===r?(n=n.thread,r=ht.Vb[n],delete ht.Vb[n],r.terminate(),Oe(n),ht.ac.splice(ht.ac.indexOf(r),1),r.Ub=0):"cancelThread"===r?ht.Vb[n.thread].postMessage({cmd:"cancel"}):"loaded"===r?(t.loaded=!0,e&&e(t),t.$b&&(t.$b(),delete t.$b)):"print"===r?C("Thread "+n.threadId+": "+n.text):"printErr"===r?x("Thread "+n.threadId+": "+n.text):"alert"===r?alert("Thread "+n.threadId+": "+n.text):"setimmediate"===n.target?t.postMessage(n):"onAbort"===r?u.onAbort&&u.onAbort(n.arg):r&&x("worker sent an unknown command "+r);ht.Bc=void 0},t.onerror=t=>{throw x("worker sent an error! 
"+t.filename+":"+t.lineno+": "+t.message),t},_&&(t.on("message",(function(e){t.onmessage({data:e})})),t.on("error",(function(e){t.onerror(e)})),t.on("detachedExit",(function(){}))),t.postMessage({cmd:"load",urlOrBlob:u.mainScriptUrlOrBlob||_scriptDir,wasmMemory:j,wasmModule:k})},yc:function(){var t=S("ort-wasm-threaded.worker.js");ht.Yb.push(new Worker(t))},Cc:function(){return 0==ht.Yb.length&&(ht.yc(),ht.Fc(ht.Yb[0])),ht.Yb.pop()}};function dt(t){for(;0>2>>>0];t=a()[t+48>>2>>>0],Te(e,e-t),Me(e)};var mt=[];function gt(t){var e=mt[t];return e||(t>=mt.length&&(mt.length=t+1),mt[t]=e=$.get(t)),e}u.invokeEntryPoint=function(t,e){t=gt(t)(e),Q()?ht.nc(t):Ae(t)};var vt,wt,_t=[],Ot=0,At=0;function St(t){this.Zb=t,this.Sb=t-24,this.xc=function(t){i()[this.Sb+4>>2>>>0]=t},this.bc=function(){return i()[this.Sb+4>>2>>>0]},this.wc=function(t){i()[this.Sb+8>>2>>>0]=t},this.Dc=function(){return i()[this.Sb+8>>2>>>0]},this.rc=function(){a()[this.Sb>>2>>>0]=0},this.hc=function(t){t=t?1:0,e()[this.Sb+12>>0>>>0]=t},this.uc=function(){return 0!=e()[this.Sb+12>>0>>>0]},this.ic=function(t){t=t?1:0,e()[this.Sb+13>>0>>>0]=t},this.kc=function(){return 0!=e()[this.Sb+13>>0>>>0]},this.fc=function(t,e){this.cc(0),this.xc(t),this.wc(e),this.rc(),this.hc(!1),this.ic(!1)},this.sc=function(){Atomics.add(a(),this.Sb>>2,1)},this.Hc=function(){return 1===Atomics.sub(a(),this.Sb>>2,1)},this.cc=function(t){i()[this.Sb+16>>2>>>0]=t},this.tc=function(){return i()[this.Sb+16>>2>>>0]},this.vc=function(){if(Re(this.bc()))return i()[this.Zb>>2>>>0];var t=this.tc();return 0!==t?t:this.Zb}}function Tt(t){return ye(new St(t).Sb)}function Et(t,e,n,r){return O?$t(3,1,t,e,n,r):Mt(t,e,n,r)}function Mt(t,e,n,r){if("undefined"==typeof SharedArrayBuffer)return x("Current environment does not support SharedArrayBuffer, pthreads are not available!"),6;var a=[];return O&&0===a.length?Et(t,e,n,r):(t={Ic:n,Ub:t,zc:r,Nc:a},O?(t.Oc="spawnThread",postMessage(t,a),0):lt(t))}function Ct(t,e,n){return O?$t(4,1,t,e,n):0}function xt(t,e){if(O)return $t(5,1,t,e)}function Rt(t,e){if(O)return $t(6,1,t,e)}function jt(t,e,n){if(O)return $t(7,1,t,e,n)}function kt(t,e,n){return O?$t(8,1,t,e,n):0}function Dt(t,e){if(O)return $t(9,1,t,e)}function Pt(t,e,n){if(O)return $t(10,1,t,e,n)}function Ut(t,e,n,r){if(O)return $t(11,1,t,e,n,r)}function Ft(t,e,n,r){if(O)return $t(12,1,t,e,n,r)}function It(t,e,n,r){if(O)return $t(13,1,t,e,n,r)}function Wt(t){if(O)return $t(14,1,t)}function Ht(t,e){if(O)return $t(15,1,t,e)}function Lt(t,e,n){if(O)return $t(16,1,t,e,n)}function zt(t){Atomics.store(a(),t>>2,1),he()&&_e(t),Atomics.compareExchange(a(),t>>2,1,0)}function Yt(t){return i()[t>>>2]+4294967296*a()[t+4>>>2]}function Bt(t,e,n,r,a,i){return O?$t(17,1,t,e,n,r,a,i):-52}function Gt(t,e,n,r,a,i){if(O)return $t(18,1,t,e,n,r,a,i)}function Nt(t){var n=G(t)+1,r=de(n);return r&&B(t,e(),r,n),r}function Vt(t,e,n){function r(t){return(t=t.toTimeString().match(/\\(([A-Za-z ]+)\\)$/))?t[1]:"GMT"}if(O)return $t(19,1,t,e,n);var o=(new Date).getFullYear(),u=new Date(o,0,1),c=new Date(o,6,1);o=u.getTimezoneOffset();var s=c.getTimezoneOffset(),l=Math.max(o,s);a()[t>>2>>>0]=60*l,a()[e>>2>>>0]=Number(o!=s),t=r(u),e=r(c),t=Nt(t),e=Nt(e),s>2>>>0]=t,i()[n+4>>2>>>0]=e):(i()[n>>2>>>0]=e,i()[n+4>>2>>>0]=t)}function $t(t,e){var n=arguments.length-2,r=arguments;return yt((()=>{for(var a=Ce(8*n),i=a>>3,u=0;u>>0]=c}return we(t,n,a,e)}))}u.executeNotifiedProxyingQueue=zt,wt=_?()=>{var t=process.hrtime();return 
1e3*t[0]+t[1]/1e6}:O?()=>performance.now()-u.__performance_now_clock_drift:()=>performance.now();var qt,Xt=[],Jt={};function Zt(){if(!qt){var t,e={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:("object"==typeof navigator&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:m||"./this.program"};for(t in Jt)void 0===Jt[t]?delete e[t]:e[t]=Jt[t];var n=[];for(t in e)n.push(t+"="+e[t]);qt=n}return qt}function Qt(t,n){if(O)return $t(20,1,t,n);var r=0;return Zt().forEach((function(a,o){var u=n+r;for(o=i()[t+4*o>>2>>>0]=u,u=0;u>0>>>0]=a.charCodeAt(u);e()[o>>0>>>0]=0,r+=a.length+1})),0}function Kt(t,e){if(O)return $t(21,1,t,e);var n=Zt();i()[t>>2>>>0]=n.length;var r=0;return n.forEach((function(t){r+=t.length+1})),i()[e>>2>>>0]=r,0}function te(t){return O?$t(22,1,t):52}function ee(t,e,n,r){return O?$t(23,1,t,e,n,r):52}function ne(t,e,n,r,a){return O?$t(24,1,t,e,n,r,a):70}var re=[null,[],[]];function ae(t,e){var n=re[t];0===e||10===e?((1===t?C:x)(z(n,0)),n.length=0):n.push(e)}function ie(t,e,n,a){if(O)return $t(25,1,t,e,n,a);for(var o=0,u=0;u>2>>>0],s=i()[e+4>>2>>>0];e+=8;for(var l=0;l>>0]);o+=s}return i()[a>>2>>>0]=o,0}var oe=0;function ue(t){return 0==t%4&&(0!=t%100||0==t%400)}var ce=[31,29,31,30,31,30,31,31,30,31,30,31],se=[31,28,31,30,31,30,31,31,30,31,30,31];function le(t,n,r,i){function o(t,e,n){for(t="number"==typeof t?t.toString():t||"";t.lengtht?-1:0r-t.getDate())){t.setDate(t.getDate()+e);break}e-=r-t.getDate()+1,t.setDate(1),11>n?t.setMonth(n+1):(t.setMonth(0),t.setFullYear(t.getFullYear()+1))}return n=new Date(t.getFullYear()+1,0,4),e=s(new Date(t.getFullYear(),0,4)),n=s(n),0>=c(e,t)?0>=c(n,t)?t.getFullYear()+1:t.getFullYear():t.getFullYear()-1}var f=a()[i+40>>2>>>0];for(var p in i={Lc:a()[i>>2>>>0],Kc:a()[i+4>>2>>>0],dc:a()[i+8>>2>>>0],jc:a()[i+12>>2>>>0],ec:a()[i+16>>2>>>0],Xb:a()[i+20>>2>>>0],Tb:a()[i+24>>2>>>0],Wb:a()[i+28>>2>>>0],Rc:a()[i+32>>2>>>0],Jc:a()[i+36>>2>>>0],Mc:f?Y(f):""},r=Y(r),f={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})r=r.replace(new RegExp(p,"g"),f[p]);var h="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),d="January February March April May June July August September October November December".split(" ");for(p in f={"%a":function(t){return h[t.Tb].substring(0,3)},"%A":function(t){return h[t.Tb]},"%b":function(t){return d[t.ec].substring(0,3)},"%B":function(t){return d[t.ec]},"%C":function(t){return u((t.Xb+1900)/100|0,2)},"%d":function(t){return u(t.jc,2)},"%e":function(t){return o(t.jc,2," ")},"%g":function(t){return l(t).toString().substring(2)},"%G":function(t){return l(t)},"%H":function(t){return u(t.dc,2)},"%I":function(t){return 0==(t=t.dc)?t=12:12t.dc?"AM":"PM"},"%S":function(t){return u(t.Lc,2)},"%t":function(){return"\\t"},"%u":function(t){return t.Tb||7},"%U":function(t){return u(Math.floor((t.Wb+7-t.Tb)/7),2)},"%V":function(t){var e=Math.floor((t.Wb+7-(t.Tb+6)%7)/7);if(2>=(t.Tb+371-t.Wb-2)%7&&e++,e)53==e&&(4==(n=(t.Tb+371-t.Wb)%7)||3==n&&ue(t.Xb)||(e=1));else{e=52;var n=(t.Tb+7-t.Wb-1)%7;(4==n||5==n&&ue(t.Xb%400-1))&&e++}return u(e,2)},"%w":function(t){return t.Tb},"%W":function(t){return 
u(Math.floor((t.Wb+7-(t.Tb+6)%7)/7),2)},"%y":function(t){return(t.Xb+1900).toString().substring(2)},"%Y":function(t){return t.Xb+1900},"%z":function(t){var e=0<=(t=t.Jc);return t=Math.abs(t)/60,(e?"+":"-")+String("0000"+(t/60*100+t%60)).slice(-4)},"%Z":function(t){return t.Mc},"%%":function(){return"%"}},r=r.replace(/%%/g,"\\0\\0"),f)r.includes(p)&&(r=r.replace(new RegExp(p,"g"),f[p](i)));return p=function(t){var e=Array(G(t)+1);return B(t,e,0,e.length),e}(r=r.replace(/\\0\\0/g,"%")),p.length>n?0:(function(t,n){e().set(t,n>>>0)}(p,t),p.length-1)}ht.fc();var fe=[null,ft,bt,Et,Ct,xt,Rt,jt,kt,Dt,Pt,Ut,Ft,It,Wt,Ht,Lt,Bt,Gt,Vt,Qt,Kt,te,ee,ne,ie],pe={b:function(t){return de(t+24)+24},n:function(t){return(t=new St(t)).uc()||(t.hc(!0),Ot--),t.ic(!1),_t.push(t),t.sc(),t.vc()},ma:function(t){throw x("Unexpected exception thrown, this is not properly supported - aborting"),H=!0,t},x:function(){Se(0);var t=_t.pop();if(t.Hc()&&!t.kc()){var e=t.Dc();e&>(e)(t.Zb),Tt(t.Zb)}At=0},e:function(){var t=At;if(!t)return oe=0;var e=new St(t);e.cc(t);var n=e.bc();if(!n)return oe=0,t;for(var r=Array.prototype.slice.call(arguments),a=0;azt(r)));else if(O)postMessage({targetThread:t,cmd:"processProxyingQueue",queue:r});else{if(!(t=ht.Vb[t]))return;t.postMessage({cmd:"processProxyingQueue",queue:r})}return 1},Ea:function(){return-1},Pa:function(t,e){t=new Date(1e3*Yt(t)),a()[e>>2>>>0]=t.getUTCSeconds(),a()[e+4>>2>>>0]=t.getUTCMinutes(),a()[e+8>>2>>>0]=t.getUTCHours(),a()[e+12>>2>>>0]=t.getUTCDate(),a()[e+16>>2>>>0]=t.getUTCMonth(),a()[e+20>>2>>>0]=t.getUTCFullYear()-1900,a()[e+24>>2>>>0]=t.getUTCDay(),t=(t.getTime()-Date.UTC(t.getUTCFullYear(),0,1,0,0,0,0))/864e5|0,a()[e+28>>2>>>0]=t},Qa:function(t,e){t=new Date(1e3*Yt(t)),a()[e>>2>>>0]=t.getSeconds(),a()[e+4>>2>>>0]=t.getMinutes(),a()[e+8>>2>>>0]=t.getHours(),a()[e+12>>2>>>0]=t.getDate(),a()[e+16>>2>>>0]=t.getMonth(),a()[e+20>>2>>>0]=t.getFullYear()-1900,a()[e+24>>2>>>0]=t.getDay();var n=new Date(t.getFullYear(),0,1),r=(t.getTime()-n.getTime())/864e5|0;a()[e+28>>2>>>0]=r,a()[e+36>>2>>>0]=-60*t.getTimezoneOffset(),r=new Date(t.getFullYear(),6,1).getTimezoneOffset(),t=0|(r!=(n=n.getTimezoneOffset())&&t.getTimezoneOffset()==Math.min(n,r)),a()[e+32>>2>>>0]=t},Ra:function(t){var e=new Date(a()[t+20>>2>>>0]+1900,a()[t+16>>2>>>0],a()[t+12>>2>>>0],a()[t+8>>2>>>0],a()[t+4>>2>>>0],a()[t>>2>>>0],0),n=a()[t+32>>2>>>0],r=e.getTimezoneOffset(),i=new Date(e.getFullYear(),0,1),o=new Date(e.getFullYear(),6,1).getTimezoneOffset(),u=i.getTimezoneOffset(),c=Math.min(u,o);return 0>n?a()[t+32>>2>>>0]=Number(o!=u&&c==r):0>2>>>0]=e.getDay(),n=(e.getTime()-i.getTime())/864e5|0,a()[t+28>>2>>>0]=n,a()[t>>2>>>0]=e.getSeconds(),a()[t+4>>2>>>0]=e.getMinutes(),a()[t+8>>2>>>0]=e.getHours(),a()[t+12>>2>>>0]=e.getDate(),a()[t+16>>2>>>0]=e.getMonth(),e.getTime()/1e3|0},Aa:Bt,Ba:Gt,Sa:function t(e,n,r){t.Ac||(t.Ac=!0,Vt(e,n,r))},y:function(){at("")},U:function(){if(!_&&!w){var t="Blocking on the main thread is very dangerous, see https://emscripten.org/docs/porting/pthreads.html#blocking-on-the-main-browser-thread";vt||(vt={}),vt[t]||(vt[t]=1,_&&(t="warning: "+t),x(t))}},ra:function(){return 4294901760},B:wt,Ia:function(t,e,n){r().copyWithin(t>>>0,e>>>0,e+n>>>0)},F:function(){return _?n(993).cpus().length:navigator.hardwareConcurrency},Da:function(t,e,n){Xt.length=e,n>>=3;for(var r=0;r>>0];return(0>t?ut[-t-1]:fe[t]).apply(null,Xt)},qa:function(t){var e=r().length;if((t>>>=0)<=e||4294901760=n;n*=2){var a=e*(1+.2/n);a=Math.min(a,t+100663296);var 
i=Math;a=Math.max(t,a),i=i.min.call(i,4294901760,a+(65536-a%65536)%65536);t:{try{j.grow(i-D.byteLength+65535>>>16),N(j.buffer);var o=1;break t}catch(t){}o=void 0}if(o)return!0}return!1},Na:function(){throw"unwind"},Ga:Qt,Ha:Kt,J:pt,I:te,S:ee,ga:ne,R:ie,d:function(){return oe},na:function t(r,a){t.lc||(t.lc=function(){if("object"==typeof crypto&&"function"==typeof crypto.getRandomValues){var t=new Uint8Array(1);return()=>(crypto.getRandomValues(t),t[0])}if(_)try{var e=n(Object(function(){var t=new Error("Cannot find module \'crypto\'");throw t.code="MODULE_NOT_FOUND",t}()));return()=>e.randomBytes(1)[0]}catch(t){}return()=>at("randomDevice")}());for(var i=0;i>0>>>0]=t.lc();return 0},ia:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},ja:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},K:function(t){var e=Ee();try{return gt(t)()}catch(t){if(Me(e),t!==t+0)throw t;Se(1,0)}},f:function(t,e){var n=Ee();try{return gt(t)(e)}catch(t){if(Me(n),t!==t+0)throw t;Se(1,0)}},P:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},Q:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},k:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},p:function(t,e,n,r){var a=Ee();try{return gt(t)(e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},q:function(t,e,n,r,a){var i=Ee();try{return gt(t)(e,n,r,a)}catch(t){if(Me(i),t!==t+0)throw t;Se(1,0)}},N:function(t,e,n,r,a,i){var o=Ee();try{return gt(t)(e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},s:function(t,e,n,r,a,i){var o=Ee();try{return gt(t)(e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},w:function(t,e,n,r,a,i,o){var u=Ee();try{return gt(t)(e,n,r,a,i,o)}catch(t){if(Me(u),t!==t+0)throw t;Se(1,0)}},L:function(t,e,n,r,a,i,o,u){var c=Ee();try{return gt(t)(e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},E:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=Ee();try{return gt(t)(e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(Me(p),t!==t+0)throw t;Se(1,0)}},aa:function(t,e,n,r,a,i,o,u){var c=Ee();try{return He(t,e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},_:function(t,e,n,r,a,i,o){var u=Ee();try{return ke(t,e,n,r,a,i,o)}catch(t){if(Me(u),t!==t+0)throw t;Se(1,0)}},Z:function(t,e,n,r,a){var i=Ee();try{return Le(t,e,n,r,a)}catch(t){if(Me(i),t!==t+0)throw t;Se(1,0)}},ca:function(t,e,n,r){var a=Ee();try{return Ie(t,e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},$:function(t){var e=Ee();try{return je(t)}catch(t){if(Me(e),t!==t+0)throw t;Se(1,0)}},ba:function(t,e){var n=Ee();try{return We(t,e)}catch(t){if(Me(n),t!==t+0)throw t;Se(1,0)}},Y:function(t,e,n){var r=Ee();try{return De(t,e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},g:function(t){var e=Ee();try{gt(t)()}catch(t){if(Me(e),t!==t+0)throw t;Se(1,0)}},r:function(t,e){var n=Ee();try{gt(t)(e)}catch(t){if(Me(n),t!==t+0)throw t;Se(1,0)}},i:function(t,e,n){var r=Ee();try{gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},ha:function(t,e,n,r){var a=Ee();try{gt(t)(e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},m:function(t,e,n,r){var a=Ee();try{gt(t)(e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},v:function(t,e,n,r,a){var i=Ee();try{gt(t)(e,n,r,a)}catch(t){if(Me(i),t!==t+0)throw t;Se(1,0)}},u:function(t,e,n,r,a,i){var o=Ee();try{gt(t)(e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},O:function(t,e,n,r,a,i,o){var u=Ee();try{gt(t)(e,n,r,a,i,o)}catch(t){if(Me(u),t!==t+0)throw 
t;Se(1,0)}},A:function(t,e,n,r,a,i,o,u){var c=Ee();try{gt(t)(e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},ka:function(t,e,n,r,a,i,o,u,c){var s=Ee();try{gt(t)(e,n,r,a,i,o,u,c)}catch(t){if(Me(s),t!==t+0)throw t;Se(1,0)}},C:function(t,e,n,r,a,i,o,u,c,s,l){var f=Ee();try{gt(t)(e,n,r,a,i,o,u,c,s,l)}catch(t){if(Me(f),t!==t+0)throw t;Se(1,0)}},D:function(t,e,n,r,a,i,o,u,c,s,l,f,p,h,d,y){var b=Ee();try{gt(t)(e,n,r,a,i,o,u,c,s,l,f,p,h,d,y)}catch(t){if(Me(b),t!==t+0)throw t;Se(1,0)}},fa:function(t,e,n,r,a,i,o,u){var c=Ee();try{Pe(t,e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},da:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=Ee();try{Fe(t,e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(Me(p),t!==t+0)throw t;Se(1,0)}},ea:function(t,e,n,r,a,i){var o=Ee();try{Ue(t,e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},o:function(t){return t},a:j||u.wasmMemory,G:function(t){oe=t},la:le,z:function(t,e,n,r){return le(t,e,n,r)}};!function(){function t(t,e){u.asm=t.exports,ht.qc.push(u.asm.sb),$=u.asm.ub,X.unshift(u.asm.Va),k=e,O||(et--,u.monitorRunDependencies&&u.monitorRunDependencies(et),0==et&&(null!==nt&&(clearInterval(nt),nt=null),rt&&(t=rt,rt=null,t())))}function e(e){t(e.instance,e.module)}function n(t){return function(){if(!M&&(v||w)){if("function"==typeof fetch&&!tt.startsWith("file://"))return fetch(tt,{credentials:"same-origin"}).then((function(t){if(!t.ok)throw"failed to load wasm binary file at \'"+tt+"\'";return t.arrayBuffer()})).catch((function(){return ot()}));if(f)return new Promise((function(t,e){f(tt,(function(e){t(new Uint8Array(e))}),e)}))}return Promise.resolve().then((function(){return ot()}))}().then((function(t){return WebAssembly.instantiate(t,r)})).then((function(t){return t})).then(t,(function(t){x("failed to asynchronously prepare wasm: "+t),at(t)}))}var r={a:pe};if(O||(et++,u.monitorRunDependencies&&u.monitorRunDependencies(et)),u.instantiateWasm)try{return u.instantiateWasm(r,t)}catch(t){return x("Module.instantiateWasm callback failed with error: "+t),!1}(M||"function"!=typeof WebAssembly.instantiateStreaming||it()||tt.startsWith("file://")||_||"function"!=typeof fetch?n(e):fetch(tt,{credentials:"same-origin"}).then((function(t){return WebAssembly.instantiateStreaming(t,r).then(e,(function(t){return x("wasm streaming compile failed: "+t),x("falling back to ArrayBuffer 
instantiation"),n(e)}))}))).catch(s)}(),u.___wasm_call_ctors=function(){return(u.___wasm_call_ctors=u.asm.Va).apply(null,arguments)},u._OrtInit=function(){return(u._OrtInit=u.asm.Wa).apply(null,arguments)},u._OrtCreateSessionOptions=function(){return(u._OrtCreateSessionOptions=u.asm.Xa).apply(null,arguments)},u._OrtAppendExecutionProvider=function(){return(u._OrtAppendExecutionProvider=u.asm.Ya).apply(null,arguments)},u._OrtAddSessionConfigEntry=function(){return(u._OrtAddSessionConfigEntry=u.asm.Za).apply(null,arguments)},u._OrtReleaseSessionOptions=function(){return(u._OrtReleaseSessionOptions=u.asm._a).apply(null,arguments)},u._OrtCreateSession=function(){return(u._OrtCreateSession=u.asm.$a).apply(null,arguments)},u._OrtReleaseSession=function(){return(u._OrtReleaseSession=u.asm.ab).apply(null,arguments)},u._OrtGetInputCount=function(){return(u._OrtGetInputCount=u.asm.bb).apply(null,arguments)},u._OrtGetOutputCount=function(){return(u._OrtGetOutputCount=u.asm.cb).apply(null,arguments)},u._OrtGetInputName=function(){return(u._OrtGetInputName=u.asm.db).apply(null,arguments)},u._OrtGetOutputName=function(){return(u._OrtGetOutputName=u.asm.eb).apply(null,arguments)},u._OrtFree=function(){return(u._OrtFree=u.asm.fb).apply(null,arguments)},u._OrtCreateTensor=function(){return(u._OrtCreateTensor=u.asm.gb).apply(null,arguments)},u._OrtGetTensorData=function(){return(u._OrtGetTensorData=u.asm.hb).apply(null,arguments)},u._OrtReleaseTensor=function(){return(u._OrtReleaseTensor=u.asm.ib).apply(null,arguments)},u._OrtCreateRunOptions=function(){return(u._OrtCreateRunOptions=u.asm.jb).apply(null,arguments)},u._OrtAddRunConfigEntry=function(){return(u._OrtAddRunConfigEntry=u.asm.kb).apply(null,arguments)},u._OrtReleaseRunOptions=function(){return(u._OrtReleaseRunOptions=u.asm.lb).apply(null,arguments)},u._OrtRun=function(){return(u._OrtRun=u.asm.mb).apply(null,arguments)},u._OrtEndProfiling=function(){return(u._OrtEndProfiling=u.asm.nb).apply(null,arguments)};var he=u._pthread_self=function(){return(he=u._pthread_self=u.asm.ob).apply(null,arguments)},de=u._malloc=function(){return(de=u._malloc=u.asm.pb).apply(null,arguments)},ye=u._free=function(){return(ye=u._free=u.asm.qb).apply(null,arguments)},be=u._fflush=function(){return(be=u._fflush=u.asm.rb).apply(null,arguments)};u.__emscripten_tls_init=function(){return(u.__emscripten_tls_init=u.asm.sb).apply(null,arguments)};var me=u.___funcs_on_exit=function(){return(me=u.___funcs_on_exit=u.asm.tb).apply(null,arguments)},ge=u.__emscripten_thread_init=function(){return(ge=u.__emscripten_thread_init=u.asm.vb).apply(null,arguments)};u.__emscripten_thread_crashed=function(){return(u.__emscripten_thread_crashed=u.asm.wb).apply(null,arguments)};var 
ve,we=u._emscripten_run_in_main_runtime_thread_js=function(){return(we=u._emscripten_run_in_main_runtime_thread_js=u.asm.xb).apply(null,arguments)},_e=u.__emscripten_proxy_execute_task_queue=function(){return(_e=u.__emscripten_proxy_execute_task_queue=u.asm.yb).apply(null,arguments)},Oe=u.__emscripten_thread_free_data=function(){return(Oe=u.__emscripten_thread_free_data=u.asm.zb).apply(null,arguments)},Ae=u.__emscripten_thread_exit=function(){return(Ae=u.__emscripten_thread_exit=u.asm.Ab).apply(null,arguments)},Se=u._setThrew=function(){return(Se=u._setThrew=u.asm.Bb).apply(null,arguments)},Te=u._emscripten_stack_set_limits=function(){return(Te=u._emscripten_stack_set_limits=u.asm.Cb).apply(null,arguments)},Ee=u.stackSave=function(){return(Ee=u.stackSave=u.asm.Db).apply(null,arguments)},Me=u.stackRestore=function(){return(Me=u.stackRestore=u.asm.Eb).apply(null,arguments)},Ce=u.stackAlloc=function(){return(Ce=u.stackAlloc=u.asm.Fb).apply(null,arguments)},xe=u.___cxa_can_catch=function(){return(xe=u.___cxa_can_catch=u.asm.Gb).apply(null,arguments)},Re=u.___cxa_is_pointer_type=function(){return(Re=u.___cxa_is_pointer_type=u.asm.Hb).apply(null,arguments)},je=u.dynCall_j=function(){return(je=u.dynCall_j=u.asm.Ib).apply(null,arguments)},ke=u.dynCall_iiiiij=function(){return(ke=u.dynCall_iiiiij=u.asm.Jb).apply(null,arguments)},De=u.dynCall_jii=function(){return(De=u.dynCall_jii=u.asm.Kb).apply(null,arguments)},Pe=u.dynCall_viiiiij=function(){return(Pe=u.dynCall_viiiiij=u.asm.Lb).apply(null,arguments)},Ue=u.dynCall_vjji=function(){return(Ue=u.dynCall_vjji=u.asm.Mb).apply(null,arguments)},Fe=u.dynCall_viiijjjii=function(){return(Fe=u.dynCall_viiijjjii=u.asm.Nb).apply(null,arguments)},Ie=u.dynCall_iij=function(){return(Ie=u.dynCall_iij=u.asm.Ob).apply(null,arguments)},We=u.dynCall_ji=function(){return(We=u.dynCall_ji=u.asm.Pb).apply(null,arguments)},He=u.dynCall_iiiiiij=function(){return(He=u.dynCall_iiiiiij=u.asm.Qb).apply(null,arguments)},Le=u.dynCall_iiij=function(){return(Le=u.dynCall_iiij=u.asm.Rb).apply(null,arguments)};function ze(){function t(){if(!ve&&(ve=!0,u.calledRun=!0,!H)&&(O||dt(X),c(u),u.onRuntimeInitialized&&u.onRuntimeInitialized(),!O)){if(u.postRun)for("function"==typeof u.postRun&&(u.postRun=[u.postRun]);u.postRun.length;){var t=u.postRun.shift();Z.unshift(t)}dt(Z)}}if(!(0{var _scriptDir,r=(_scriptDir=(_scriptDir="undefined"!=typeof document&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(t){var e,r,a;t=t||{},e||(e=void 0!==t?t:{}),e.ready=new Promise((function(t,e){r=t,a=e}));var i,o,u,c,s,l,f=Object.assign({},e),p="./this.program",h=(t,e)=>{throw e},d="object"==typeof window,y="function"==typeof importScripts,b="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node,m="";b?(m=y?n(908).dirname(m)+"/":"//",l=()=>{s||(c=n(384),s=n(908))},i=function(t,e){return l(),t=s.normalize(t),c.readFileSync(t,e?void 0:"utf8")},u=t=>((t=i(t,!0)).buffer||(t=new Uint8Array(t)),t),o=(t,e,n)=>{l(),t=s.normalize(t),c.readFile(t,(function(t,r){t?n(t):e(r.buffer)}))},1{if(_||0{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.send(null),e.responseText},y&&(u=t=>{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.responseType="arraybuffer",e.send(null),new Uint8Array(e.response)}),o=(t,e,n)=>{var r=new XMLHttpRequest;r.open("GET",t,!0),r.responseType="arraybuffer",r.onload=()=>{200==r.status||0==r.status&&r.response?e(r.response):n()},r.onerror=n,r.send(null)});var 
g,v=e.print||console.log.bind(console),w=e.printErr||console.warn.bind(console);Object.assign(e,f),f=null,e.thisProgram&&(p=e.thisProgram),e.quit&&(h=e.quit),e.wasmBinary&&(g=e.wasmBinary);var _=e.noExitRuntime||!1;"object"!=typeof WebAssembly&&V("no native wasm support detected");var O,A,S,T,E,M,C=!1,x="undefined"!=typeof TextDecoder?new TextDecoder("utf8"):void 0;function R(t,e,n){var r=(e>>>=0)+n;for(n=e;t[n]&&!(n>=r);)++n;if(16(a=224==(240&a)?(15&a)<<12|i<<6|o:(7&a)<<18|i<<12|o<<6|63&t[e++])?r+=String.fromCharCode(a):(a-=65536,r+=String.fromCharCode(55296|a>>10,56320|1023&a))}}else r+=String.fromCharCode(a)}return r}function j(t,e){return(t>>>=0)?R(T,t,e):""}function k(t,e,n,r){if(!(0>>=0;r=n+r-1;for(var i=0;i=o&&(o=65536+((1023&o)<<10)|1023&t.charCodeAt(++i)),127>=o){if(n>=r)break;e[n++>>>0]=o}else{if(2047>=o){if(n+1>=r)break;e[n++>>>0]=192|o>>6}else{if(65535>=o){if(n+2>=r)break;e[n++>>>0]=224|o>>12}else{if(n+3>=r)break;e[n++>>>0]=240|o>>18,e[n++>>>0]=128|o>>12&63}e[n++>>>0]=128|o>>6&63}e[n++>>>0]=128|63&o}}return e[n>>>0]=0,n-a}function D(t){for(var e=0,n=0;n=r?e++:2047>=r?e+=2:55296<=r&&57343>=r?(e+=4,++n):e+=3}return e}function P(){var t=O.buffer;A=t,e.HEAP8=S=new Int8Array(t),e.HEAP16=new Int16Array(t),e.HEAP32=E=new Int32Array(t),e.HEAPU8=T=new Uint8Array(t),e.HEAPU16=new Uint16Array(t),e.HEAPU32=M=new Uint32Array(t),e.HEAPF32=new Float32Array(t),e.HEAPF64=new Float64Array(t)}var U,F=[],I=[],W=[],H=[],L=0;function z(){var t=e.preRun.shift();F.unshift(t)}var Y,B=0,G=null,N=null;function V(t){throw e.onAbort&&e.onAbort(t),w(t="Aborted("+t+")"),C=!0,t=new WebAssembly.RuntimeError(t+". Build with -sASSERTIONS for more info."),a(t),t}function $(){return Y.startsWith("data:application/octet-stream;base64,")}if(Y="ort-wasm.wasm",!$()){var q=Y;Y=e.locateFile?e.locateFile(q,m):m+q}function X(){var t=Y;try{if(t==Y&&g)return new Uint8Array(g);if(u)return u(t);throw"both async and sync fetching of the wasm failed"}catch(t){V(t)}}function J(t){this.name="ExitStatus",this.message="Program terminated with exit("+t+")",this.status=t}function Z(t){for(;0>2>>>0]=t},this.Eb=function(){return M[this.zb+4>>2>>>0]},this.Sb=function(t){M[this.zb+8>>2>>>0]=t},this.Wb=function(){return M[this.zb+8>>2>>>0]},this.Tb=function(){E[this.zb>>2>>>0]=0},this.Ib=function(t){S[this.zb+12>>0>>>0]=t?1:0},this.Pb=function(){return 0!=S[this.zb+12>>0>>>0]},this.Jb=function(t){S[this.zb+13>>0>>>0]=t?1:0},this.Lb=function(){return 0!=S[this.zb+13>>0>>>0]},this.Rb=function(t,e){this.Fb(0),this.Ub(t),this.Sb(e),this.Tb(),this.Ib(!1),this.Jb(!1)},this.Nb=function(){E[this.zb>>2>>>0]+=1},this.Xb=function(){var t=E[this.zb>>2>>>0];return E[this.zb>>2>>>0]=t-1,1===t},this.Fb=function(t){M[this.zb+16>>2>>>0]=t},this.Ob=function(){return M[this.zb+16>>2>>>0]},this.Qb=function(){if(Mt(this.Eb()))return M[this.Db>>2>>>0];var t=this.Ob();return 0!==t?t:this.Db}}function nt(t){return vt(new et(t).zb)}var rt=[];function at(t){var e=rt[t];return e||(t>=rt.length&&(rt.length=t+1),rt[t]=e=U.get(t)),e}function it(t){var e=D(t)+1,n=gt(e);return n&&k(t,S,n,e),n}var ot={};function ut(){if(!ct){var t,e={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:("object"==typeof navigator&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:p||"./this.program"};for(t in ot)void 0===ot[t]?delete e[t]:e[t]=ot[t];var n=[];for(t in e)n.push(t+"="+e[t]);ct=n}return ct}var ct,st=[null,[],[]];function lt(t,e){var n=st[t];0===e||10===e?((1===t?v:w)(R(n,0)),n.length=0):n.push(e)}var ft=0;function 
pt(t){return 0==t%4&&(0!=t%100||0==t%400)}var ht=[31,29,31,30,31,30,31,31,30,31,30,31],dt=[31,28,31,30,31,30,31,31,30,31,30,31];function yt(t,e,n,r){function a(t,e,n){for(t="number"==typeof t?t.toString():t||"";t.lengtht?-1:0r-t.getDate())){t.setDate(t.getDate()+e);break}e-=r-t.getDate()+1,t.setDate(1),11>n?t.setMonth(n+1):(t.setMonth(0),t.setFullYear(t.getFullYear()+1))}return n=new Date(t.getFullYear()+1,0,4),e=u(new Date(t.getFullYear(),0,4)),n=u(n),0>=o(e,t)?0>=o(n,t)?t.getFullYear()+1:t.getFullYear():t.getFullYear()-1}var s=E[r+40>>2>>>0];for(var l in r={$b:E[r>>2>>>0],Zb:E[r+4>>2>>>0],Gb:E[r+8>>2>>>0],Kb:E[r+12>>2>>>0],Hb:E[r+16>>2>>>0],Cb:E[r+20>>2>>>0],Ab:E[r+24>>2>>>0],Bb:E[r+28>>2>>>0],bc:E[r+32>>2>>>0],Yb:E[r+36>>2>>>0],ac:s?j(s):""},n=j(n),s={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})n=n.replace(new RegExp(l,"g"),s[l]);var f="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),p="January February March April May June July August September October November December".split(" ");for(l in s={"%a":function(t){return f[t.Ab].substring(0,3)},"%A":function(t){return f[t.Ab]},"%b":function(t){return p[t.Hb].substring(0,3)},"%B":function(t){return p[t.Hb]},"%C":function(t){return i((t.Cb+1900)/100|0,2)},"%d":function(t){return i(t.Kb,2)},"%e":function(t){return a(t.Kb,2," ")},"%g":function(t){return c(t).toString().substring(2)},"%G":function(t){return c(t)},"%H":function(t){return i(t.Gb,2)},"%I":function(t){return 0==(t=t.Gb)?t=12:12t.Gb?"AM":"PM"},"%S":function(t){return i(t.$b,2)},"%t":function(){return"\\t"},"%u":function(t){return t.Ab||7},"%U":function(t){return i(Math.floor((t.Bb+7-t.Ab)/7),2)},"%V":function(t){var e=Math.floor((t.Bb+7-(t.Ab+6)%7)/7);if(2>=(t.Ab+371-t.Bb-2)%7&&e++,e)53==e&&(4==(n=(t.Ab+371-t.Bb)%7)||3==n&&pt(t.Cb)||(e=1));else{e=52;var n=(t.Ab+7-t.Bb-1)%7;(4==n||5==n&&pt(t.Cb%400-1))&&e++}return i(e,2)},"%w":function(t){return t.Ab},"%W":function(t){return i(Math.floor((t.Bb+7-(t.Ab+6)%7)/7),2)},"%y":function(t){return(t.Cb+1900).toString().substring(2)},"%Y":function(t){return t.Cb+1900},"%z":function(t){var e=0<=(t=t.Yb);return t=Math.abs(t)/60,(e?"+":"-")+String("0000"+(t/60*100+t%60)).slice(-4)},"%Z":function(t){return t.ac},"%%":function(){return"%"}},n=n.replace(/%%/g,"\\0\\0"),s)n.includes(l)&&(n=n.replace(new RegExp(l,"g"),s[l](r)));return l=function(t){var e=Array(D(t)+1);return k(t,e,0,e.length),e}(n=n.replace(/\\0\\0/g,"%")),l.length>e?0:(S.set(l,t>>>0),l.length-1)}var bt={a:function(t){return gt(t+24)+24},m:function(t){return(t=new et(t)).Pb()||(t.Ib(!0),K--),t.Jb(!1),Q.push(t),t.Nb(),t.Qb()},ia:function(t){throw w("Unexpected exception thrown, this is not properly supported - aborting"),C=!0,t},w:function(){Ot(0);var t=Q.pop();if(t.Xb()&&!t.Lb()){var e=t.Wb();e&&at(e)(t.Db),nt(t.Db)}tt=0},d:function(){var t=tt;if(!t)return ft=0;var e=new et(t);e.Fb(t);var n=e.Eb();if(!n)return ft=0,t;for(var 
r=Array.prototype.slice.call(arguments),a=0;a>>2]+4294967296*E[t+4>>>2])),E[e>>2>>>0]=t.getUTCSeconds(),E[e+4>>2>>>0]=t.getUTCMinutes(),E[e+8>>2>>>0]=t.getUTCHours(),E[e+12>>2>>>0]=t.getUTCDate(),E[e+16>>2>>>0]=t.getUTCMonth(),E[e+20>>2>>>0]=t.getUTCFullYear()-1900,E[e+24>>2>>>0]=t.getUTCDay(),E[e+28>>2>>>0]=(t.getTime()-Date.UTC(t.getUTCFullYear(),0,1,0,0,0,0))/864e5|0},Ea:function(t,e){t=new Date(1e3*(M[t>>>2]+4294967296*E[t+4>>>2])),E[e>>2>>>0]=t.getSeconds(),E[e+4>>2>>>0]=t.getMinutes(),E[e+8>>2>>>0]=t.getHours(),E[e+12>>2>>>0]=t.getDate(),E[e+16>>2>>>0]=t.getMonth(),E[e+20>>2>>>0]=t.getFullYear()-1900,E[e+24>>2>>>0]=t.getDay();var n=new Date(t.getFullYear(),0,1);E[e+28>>2>>>0]=(t.getTime()-n.getTime())/864e5|0,E[e+36>>2>>>0]=-60*t.getTimezoneOffset();var r=new Date(t.getFullYear(),6,1).getTimezoneOffset();n=n.getTimezoneOffset(),E[e+32>>2>>>0]=0|(r!=n&&t.getTimezoneOffset()==Math.min(n,r))},Fa:function(t){var e=new Date(E[t+20>>2>>>0]+1900,E[t+16>>2>>>0],E[t+12>>2>>>0],E[t+8>>2>>>0],E[t+4>>2>>>0],E[t>>2>>>0],0),n=E[t+32>>2>>>0],r=e.getTimezoneOffset(),a=new Date(e.getFullYear(),0,1),i=new Date(e.getFullYear(),6,1).getTimezoneOffset(),o=a.getTimezoneOffset(),u=Math.min(o,i);return 0>n?E[t+32>>2>>>0]=Number(i!=o&&u==r):0>2>>>0]=e.getDay(),E[t+28>>2>>>0]=(e.getTime()-a.getTime())/864e5|0,E[t>>2>>>0]=e.getSeconds(),E[t+4>>2>>>0]=e.getMinutes(),E[t+8>>2>>>0]=e.getHours(),E[t+12>>2>>>0]=e.getDate(),E[t+16>>2>>>0]=e.getMonth(),e.getTime()/1e3|0},sa:function(){return-52},ta:function(){},Ga:function t(e,n,r){t.Vb||(t.Vb=!0,function(t,e,n){function r(t){return(t=t.toTimeString().match(/\\(([A-Za-z ]+)\\)$/))?t[1]:"GMT"}var a=(new Date).getFullYear(),i=new Date(a,0,1),o=new Date(a,6,1);a=i.getTimezoneOffset();var u=o.getTimezoneOffset();E[t>>2>>>0]=60*Math.max(a,u),E[e>>2>>>0]=Number(a!=u),t=r(i),e=r(o),t=it(t),e=it(e),u>2>>>0]=t,M[n+4>>2>>>0]=e):(M[n>>2>>>0]=e,M[n+4>>2>>>0]=t)}(e,n,r))},B:function(){V("")},ma:function(){return 4294901760},I:b?()=>{var t=process.hrtime();return 1e3*t[0]+t[1]/1e6}:()=>performance.now(),xa:function(t,e,n){T.copyWithin(t>>>0,e>>>0,e+n>>>0)},G:function(t){var e=T.length;if(4294901760<(t>>>=0))return!1;for(var n=1;4>=n;n*=2){var r=e*(1+.2/n);r=Math.min(r,t+100663296);var a=Math;r=Math.max(t,r),a=a.min.call(a,4294901760,r+(65536-r%65536)%65536);t:{try{O.grow(a-A.byteLength+65535>>>16),P();var i=1;break t}catch(t){}i=void 0}if(i)return!0}return!1},va:function(t,e){var n=0;return ut().forEach((function(r,a){var i=e+n;for(a=M[t+4*a>>2>>>0]=i,i=0;i>0>>>0]=r.charCodeAt(i);S[a>>0>>>0]=0,n+=r.length+1})),0},wa:function(t,e){var n=ut();M[t>>2>>>0]=n.length;var r=0;return n.forEach((function(t){r+=t.length+1})),M[e>>2>>>0]=r,0},ba:function(t){_||0>2>>>0],u=M[e+4>>2>>>0];e+=8;for(var c=0;c>>0]);a+=u}return M[r>>2>>>0]=a,0},c:function(){return ft},ja:function t(e,r){t.Mb||(t.Mb=function(){if("object"==typeof crypto&&"function"==typeof crypto.getRandomValues){var t=new Uint8Array(1);return()=>(crypto.getRandomValues(t),t[0])}if(b)try{var e=n(Object(function(){var t=new Error("Cannot find module \'crypto\'");throw t.code="MODULE_NOT_FOUND",t}()));return()=>e.randomBytes(1)[0]}catch(t){}return()=>V("randomDevice")}());for(var a=0;a>0>>>0]=t.Mb();return 0},ea:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},fa:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},J:function(t){var e=At();try{return at(t)()}catch(t){if(St(e),t!==t+0)throw t;Ot(1,0)}},e:function(t,e){var n=At();try{return 
at(t)(e)}catch(t){if(St(n),t!==t+0)throw t;Ot(1,0)}},N:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},O:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},j:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},o:function(t,e,n,r){var a=At();try{return at(t)(e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},p:function(t,e,n,r,a){var i=At();try{return at(t)(e,n,r,a)}catch(t){if(St(i),t!==t+0)throw t;Ot(1,0)}},M:function(t,e,n,r,a,i){var o=At();try{return at(t)(e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},r:function(t,e,n,r,a,i){var o=At();try{return at(t)(e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},v:function(t,e,n,r,a,i,o){var u=At();try{return at(t)(e,n,r,a,i,o)}catch(t){if(St(u),t!==t+0)throw t;Ot(1,0)}},K:function(t,e,n,r,a,i,o,u){var c=At();try{return at(t)(e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},D:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=At();try{return at(t)(e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(St(p),t!==t+0)throw t;Ot(1,0)}},X:function(t,e,n,r,a,i,o,u){var c=At();try{return Ft(t,e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},V:function(t,e,n,r,a,i,o){var u=At();try{return xt(t,e,n,r,a,i,o)}catch(t){if(St(u),t!==t+0)throw t;Ot(1,0)}},U:function(t,e,n,r,a){var i=At();try{return It(t,e,n,r,a)}catch(t){if(St(i),t!==t+0)throw t;Ot(1,0)}},Z:function(t,e,n,r){var a=At();try{return Pt(t,e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},W:function(t){var e=At();try{return Ct(t)}catch(t){if(St(e),t!==t+0)throw t;Ot(1,0)}},Y:function(t,e){var n=At();try{return Ut(t,e)}catch(t){if(St(n),t!==t+0)throw t;Ot(1,0)}},T:function(t,e,n){var r=At();try{return Rt(t,e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},f:function(t){var e=At();try{at(t)()}catch(t){if(St(e),t!==t+0)throw t;Ot(1,0)}},q:function(t,e){var n=At();try{at(t)(e)}catch(t){if(St(n),t!==t+0)throw t;Ot(1,0)}},h:function(t,e,n){var r=At();try{at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},da:function(t,e,n,r){var a=At();try{at(t)(e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},l:function(t,e,n,r){var a=At();try{at(t)(e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},t:function(t,e,n,r,a){var i=At();try{at(t)(e,n,r,a)}catch(t){if(St(i),t!==t+0)throw t;Ot(1,0)}},u:function(t,e,n,r,a,i){var o=At();try{at(t)(e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},x:function(t,e,n,r,a,i,o){var u=At();try{at(t)(e,n,r,a,i,o)}catch(t){if(St(u),t!==t+0)throw t;Ot(1,0)}},z:function(t,e,n,r,a,i,o,u){var c=At();try{at(t)(e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},ga:function(t,e,n,r,a,i,o,u,c){var s=At();try{at(t)(e,n,r,a,i,o,u,c)}catch(t){if(St(s),t!==t+0)throw t;Ot(1,0)}},A:function(t,e,n,r,a,i,o,u,c,s,l){var f=At();try{at(t)(e,n,r,a,i,o,u,c,s,l)}catch(t){if(St(f),t!==t+0)throw t;Ot(1,0)}},C:function(t,e,n,r,a,i,o,u,c,s,l,f,p,h,d,y){var b=At();try{at(t)(e,n,r,a,i,o,u,c,s,l,f,p,h,d,y)}catch(t){if(St(b),t!==t+0)throw t;Ot(1,0)}},aa:function(t,e,n,r,a,i,o,u){var c=At();try{jt(t,e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},_:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=At();try{Dt(t,e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(St(p),t!==t+0)throw t;Ot(1,0)}},$:function(t,e,n,r,a,i){var o=At();try{kt(t,e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},n:function(t){return t},F:function(t){ft=t},ha:yt,y:function(t,e,n,r){return yt(t,e,n,r)}};!function(){function 
t(t){e.asm=t.exports,O=e.asm.Ka,P(),U=e.asm.ib,I.unshift(e.asm.La),B--,e.monitorRunDependencies&&e.monitorRunDependencies(B),0==B&&(null!==G&&(clearInterval(G),G=null),N&&(t=N,N=null,t()))}function n(e){t(e.instance)}function r(t){return function(){if(!g&&(d||y)){if("function"==typeof fetch&&!Y.startsWith("file://"))return fetch(Y,{credentials:"same-origin"}).then((function(t){if(!t.ok)throw"failed to load wasm binary file at \'"+Y+"\'";return t.arrayBuffer()})).catch((function(){return X()}));if(o)return new Promise((function(t,e){o(Y,(function(e){t(new Uint8Array(e))}),e)}))}return Promise.resolve().then((function(){return X()}))}().then((function(t){return WebAssembly.instantiate(t,i)})).then((function(t){return t})).then(t,(function(t){w("failed to asynchronously prepare wasm: "+t),V(t)}))}var i={a:bt};if(B++,e.monitorRunDependencies&&e.monitorRunDependencies(B),e.instantiateWasm)try{return e.instantiateWasm(i,t)}catch(t){return w("Module.instantiateWasm callback failed with error: "+t),!1}(g||"function"!=typeof WebAssembly.instantiateStreaming||$()||Y.startsWith("file://")||b||"function"!=typeof fetch?r(n):fetch(Y,{credentials:"same-origin"}).then((function(t){return WebAssembly.instantiateStreaming(t,i).then(n,(function(t){return w("wasm streaming compile failed: "+t),w("falling back to ArrayBuffer instantiation"),r(n)}))}))).catch(a)}(),e.___wasm_call_ctors=function(){return(e.___wasm_call_ctors=e.asm.La).apply(null,arguments)},e._OrtInit=function(){return(e._OrtInit=e.asm.Ma).apply(null,arguments)},e._OrtCreateSessionOptions=function(){return(e._OrtCreateSessionOptions=e.asm.Na).apply(null,arguments)},e._OrtAppendExecutionProvider=function(){return(e._OrtAppendExecutionProvider=e.asm.Oa).apply(null,arguments)},e._OrtAddSessionConfigEntry=function(){return(e._OrtAddSessionConfigEntry=e.asm.Pa).apply(null,arguments)},e._OrtReleaseSessionOptions=function(){return(e._OrtReleaseSessionOptions=e.asm.Qa).apply(null,arguments)},e._OrtCreateSession=function(){return(e._OrtCreateSession=e.asm.Ra).apply(null,arguments)},e._OrtReleaseSession=function(){return(e._OrtReleaseSession=e.asm.Sa).apply(null,arguments)},e._OrtGetInputCount=function(){return(e._OrtGetInputCount=e.asm.Ta).apply(null,arguments)},e._OrtGetOutputCount=function(){return(e._OrtGetOutputCount=e.asm.Ua).apply(null,arguments)},e._OrtGetInputName=function(){return(e._OrtGetInputName=e.asm.Va).apply(null,arguments)},e._OrtGetOutputName=function(){return(e._OrtGetOutputName=e.asm.Wa).apply(null,arguments)},e._OrtFree=function(){return(e._OrtFree=e.asm.Xa).apply(null,arguments)},e._OrtCreateTensor=function(){return(e._OrtCreateTensor=e.asm.Ya).apply(null,arguments)},e._OrtGetTensorData=function(){return(e._OrtGetTensorData=e.asm.Za).apply(null,arguments)},e._OrtReleaseTensor=function(){return(e._OrtReleaseTensor=e.asm._a).apply(null,arguments)},e._OrtCreateRunOptions=function(){return(e._OrtCreateRunOptions=e.asm.$a).apply(null,arguments)},e._OrtAddRunConfigEntry=function(){return(e._OrtAddRunConfigEntry=e.asm.ab).apply(null,arguments)},e._OrtReleaseRunOptions=function(){return(e._OrtReleaseRunOptions=e.asm.bb).apply(null,arguments)},e._OrtRun=function(){return(e._OrtRun=e.asm.cb).apply(null,arguments)},e._OrtEndProfiling=function(){return(e._OrtEndProfiling=e.asm.db).apply(null,arguments)};var 
mt,gt=e._malloc=function(){return(gt=e._malloc=e.asm.eb).apply(null,arguments)},vt=e._free=function(){return(vt=e._free=e.asm.fb).apply(null,arguments)},wt=e._fflush=function(){return(wt=e._fflush=e.asm.gb).apply(null,arguments)},_t=e.___funcs_on_exit=function(){return(_t=e.___funcs_on_exit=e.asm.hb).apply(null,arguments)},Ot=e._setThrew=function(){return(Ot=e._setThrew=e.asm.jb).apply(null,arguments)},At=e.stackSave=function(){return(At=e.stackSave=e.asm.kb).apply(null,arguments)},St=e.stackRestore=function(){return(St=e.stackRestore=e.asm.lb).apply(null,arguments)},Tt=e.stackAlloc=function(){return(Tt=e.stackAlloc=e.asm.mb).apply(null,arguments)},Et=e.___cxa_can_catch=function(){return(Et=e.___cxa_can_catch=e.asm.nb).apply(null,arguments)},Mt=e.___cxa_is_pointer_type=function(){return(Mt=e.___cxa_is_pointer_type=e.asm.ob).apply(null,arguments)},Ct=e.dynCall_j=function(){return(Ct=e.dynCall_j=e.asm.pb).apply(null,arguments)},xt=e.dynCall_iiiiij=function(){return(xt=e.dynCall_iiiiij=e.asm.qb).apply(null,arguments)},Rt=e.dynCall_jii=function(){return(Rt=e.dynCall_jii=e.asm.rb).apply(null,arguments)},jt=e.dynCall_viiiiij=function(){return(jt=e.dynCall_viiiiij=e.asm.sb).apply(null,arguments)},kt=e.dynCall_vjji=function(){return(kt=e.dynCall_vjji=e.asm.tb).apply(null,arguments)},Dt=e.dynCall_viiijjjii=function(){return(Dt=e.dynCall_viiijjjii=e.asm.ub).apply(null,arguments)},Pt=e.dynCall_iij=function(){return(Pt=e.dynCall_iij=e.asm.vb).apply(null,arguments)},Ut=e.dynCall_ji=function(){return(Ut=e.dynCall_ji=e.asm.wb).apply(null,arguments)},Ft=e.dynCall_iiiiiij=function(){return(Ft=e.dynCall_iiiiiij=e.asm.xb).apply(null,arguments)},It=e.dynCall_iiij=function(){return(It=e.dynCall_iiij=e.asm.yb).apply(null,arguments)};function Wt(){function t(){if(!mt&&(mt=!0,e.calledRun=!0,!C)){if(Z(I),r(e),e.onRuntimeInitialized&&e.onRuntimeInitialized(),e.postRun)for("function"==typeof e.postRun&&(e.postRun=[e.postRun]);e.postRun.length;){var t=e.postRun.shift();H.unshift(t)}Z(H)}}if(!(0{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.iterateExtraOptions=void 0,e.iterateExtraOptions=(t,n,r,a)=>{if("object"==typeof t&&null!==t){if(r.has(t))throw new Error("Circular reference in options");r.add(t)}Object.entries(t).forEach((([t,i])=>{const o=n?n+t:t;if("object"==typeof i)(0,e.iterateExtraOptions)(i,o+".",r,a);else if("string"==typeof i||"number"==typeof i)a(o,i.toString());else{if("boolean"!=typeof i)throw new Error("Can\'t handle extra config type: "+typeof i);a(o,i?"1":"0")}}))}},586:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.setRunOptions=void 0;const r=n(967),a=n(983),i=n(361);e.setRunOptions=t=>{const e=(0,i.getInstance)();let n=0;const o=[],u=t||{};try{if(void 0===(null==t?void 0:t.logSeverityLevel))u.logSeverityLevel=2;else if("number"!=typeof t.logSeverityLevel||!Number.isInteger(t.logSeverityLevel)||t.logSeverityLevel<0||t.logSeverityLevel>4)throw new Error(`log serverity level is not valid: ${t.logSeverityLevel}`);if(void 0===(null==t?void 0:t.logVerbosityLevel))u.logVerbosityLevel=0;else if("number"!=typeof t.logVerbosityLevel||!Number.isInteger(t.logVerbosityLevel))throw new Error(`log verbosity level is not valid: ${t.logVerbosityLevel}`);void 0===(null==t?void 0:t.terminate)&&(u.terminate=!1);let i=0;if(void 0!==(null==t?void 0:t.tag)&&(i=(0,a.allocWasmString)(t.tag,o)),n=e._OrtCreateRunOptions(u.logSeverityLevel,u.logVerbosityLevel,!!u.terminate,i),0===n)throw new Error("Can\'t create run options");return void 0!==(null==t?void 
0:t.extra)&&(0,r.iterateExtraOptions)(t.extra,"",new WeakSet,((t,r)=>{const i=(0,a.allocWasmString)(t,o),u=(0,a.allocWasmString)(r,o);if(0!==e._OrtAddRunConfigEntry(n,i,u))throw new Error(`Can\'t set a run config entry: ${t} - ${r}`)})),[n,o]}catch(t){throw 0!==n&&e._OrtReleaseRunOptions(n),o.forEach(e._free),t}}},919:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.setSessionOptions=void 0;const r=n(967),a=n(983),i=n(361);e.setSessionOptions=t=>{const e=(0,i.getInstance)();let n=0;const o=[],u=t||{};(t=>{t.extra||(t.extra={}),t.extra.session||(t.extra.session={});const e=t.extra.session;e.use_ort_model_bytes_directly||(e.use_ort_model_bytes_directly="1")})(u);try{void 0===(null==t?void 0:t.graphOptimizationLevel)&&(u.graphOptimizationLevel="all");const c=(t=>{switch(t){case"disabled":return 0;case"basic":return 1;case"extended":return 2;case"all":return 99;default:throw new Error(`unsupported graph optimization level: ${t}`)}})(u.graphOptimizationLevel);void 0===(null==t?void 0:t.enableCpuMemArena)&&(u.enableCpuMemArena=!0),void 0===(null==t?void 0:t.enableMemPattern)&&(u.enableMemPattern=!0),void 0===(null==t?void 0:t.executionMode)&&(u.executionMode="sequential");const s=(t=>{switch(t){case"sequential":return 0;case"parallel":return 1;default:throw new Error(`unsupported execution mode: ${t}`)}})(u.executionMode);let l=0;if(void 0!==(null==t?void 0:t.logId)&&(l=(0,a.allocWasmString)(t.logId,o)),void 0===(null==t?void 0:t.logSeverityLevel))u.logSeverityLevel=2;else if("number"!=typeof t.logSeverityLevel||!Number.isInteger(t.logSeverityLevel)||t.logSeverityLevel<0||t.logSeverityLevel>4)throw new Error(`log serverity level is not valid: ${t.logSeverityLevel}`);if(void 0===(null==t?void 0:t.logVerbosityLevel))u.logVerbosityLevel=0;else if("number"!=typeof t.logVerbosityLevel||!Number.isInteger(t.logVerbosityLevel))throw new Error(`log verbosity level is not valid: ${t.logVerbosityLevel}`);if(void 0===(null==t?void 0:t.enableProfiling)&&(u.enableProfiling=!1),n=e._OrtCreateSessionOptions(c,!!u.enableCpuMemArena,!!u.enableMemPattern,s,!!u.enableProfiling,0,l,u.logSeverityLevel,u.logVerbosityLevel),0===n)throw new Error("Can\'t create session options");return(null==t?void 0:t.executionProviders)&&((t,e,n)=>{for(const r of e){let e="string"==typeof r?r:r.name;switch(e){case"xnnpack":e="XNNPACK";break;case"wasm":case"cpu":continue;default:throw new Error(`not supported EP: ${e}`)}const o=(0,a.allocWasmString)(e,n);if(0!==(0,i.getInstance)()._OrtAppendExecutionProvider(t,o))throw new Error(`Can\'t append execution provider: ${e}`)}})(n,t.executionProviders,o),void 0!==(null==t?void 0:t.extra)&&(0,r.iterateExtraOptions)(t.extra,"",new WeakSet,((t,r)=>{const i=(0,a.allocWasmString)(t,o),u=(0,a.allocWasmString)(r,o);if(0!==e._OrtAddSessionConfigEntry(n,i,u))throw new Error(`Can\'t set a session config entry: ${t} - ${r}`)})),[n,o]}catch(t){throw 0!==n&&e._OrtReleaseSessionOptions(n),o.forEach(e._free),t}}},983:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.allocWasmString=void 0;const r=n(361);e.allocWasmString=(t,e)=>{const n=(0,r.getInstance)(),a=n.lengthBytesUTF8(t)+1,i=n._malloc(a);return n.stringToUTF8(t,i,a),e.push(i),i}},349:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.extractTransferableBuffers=e.endProfiling=e.run=e.releaseSession=e.createSession=e.createSessionFinalize=e.createSessionAllocate=e.initOrt=void 0;const r=n(586),a=n(919),i=n(983),o=n(361);e.initOrt=(t,e)=>{const 
n=(0,o.getInstance)()._OrtInit(t,e);if(0!==n)throw new Error(`Can\'t initialize onnxruntime. error code = ${n}`)};const u=new Map;e.createSessionAllocate=t=>{const e=(0,o.getInstance)(),n=e._malloc(t.byteLength);return e.HEAPU8.set(t,n),[n,t.byteLength]},e.createSessionFinalize=(t,e)=>{const n=(0,o.getInstance)();let r=0,i=0,c=[];try{if([i,c]=(0,a.setSessionOptions)(e),r=n._OrtCreateSession(t[0],t[1],i),0===r)throw new Error("Can\'t create a session")}finally{n._free(t[0]),n._OrtReleaseSessionOptions(i),c.forEach(n._free)}const s=n._OrtGetInputCount(r),l=n._OrtGetOutputCount(r),f=[],p=[],h=[],d=[];for(let t=0;t{const r=(0,e.createSessionAllocate)(t);return(0,e.createSessionFinalize)(r,n)},e.releaseSession=t=>{const e=(0,o.getInstance)(),n=u.get(t);if(!n)throw new Error("invalid session id");const r=n[0],a=n[1],i=n[2];a.forEach(e._OrtFree),i.forEach(e._OrtFree),e._OrtReleaseSession(r),u.delete(t)};const c=t=>{switch(t){case"int8":return 3;case"uint8":return 2;case"bool":return 9;case"int16":return 5;case"uint16":return 4;case"int32":return 6;case"uint32":return 12;case"float32":return 1;case"float64":return 11;case"string":return 8;case"int64":return 7;case"uint64":return 13;default:throw new Error(`unsupported data type: ${t}`)}},s=t=>{switch(t){case 3:return"int8";case 2:return"uint8";case 9:return"bool";case 5:return"int16";case 4:return"uint16";case 6:return"int32";case 12:return"uint32";case 1:return"float32";case 11:return"float64";case 8:return"string";case 7:return"int64";case 13:return"uint64";default:throw new Error(`unsupported data type: ${t}`)}},l=t=>{switch(t){case"float32":return Float32Array;case"uint8":case"bool":return Uint8Array;case"int8":return Int8Array;case"uint16":return Uint16Array;case"int16":return Int16Array;case"int32":return Int32Array;case"float64":return Float64Array;case"uint32":return Uint32Array;case"int64":return BigInt64Array;case"uint64":return BigUint64Array;default:throw new Error(`unsupported type: ${t}`)}};e.run=(t,e,n,a,f)=>{const p=(0,o.getInstance)(),h=u.get(t);if(!h)throw new Error("invalid session id");const d=h[0],y=h[1],b=h[2],m=e.length,g=a.length;let v=0,w=[];const _=[],O=[];try{[v,w]=(0,r.setRunOptions)(f);for(let t=0;tp.HEAP32[t++]=e));const n=p._OrtCreateTensor(c(e),o,u,l,r.length);if(0===n)throw new Error("Can\'t create a tensor");_.push(n)}finally{p.stackRestore(s)}}const t=p.stackSave(),o=p.stackAlloc(4*m),u=p.stackAlloc(4*m),h=p.stackAlloc(4*g),A=p.stackAlloc(4*g);try{let n=o/4,r=u/4,i=h/4,c=A/4;for(let t=0;tt*e));if(a=s(o),"string"===a){const t=[];let e=i/4;for(let n=0;n{const e=(0,o.getInstance)(),n=u.get(t);if(!n)throw new Error("invalid session id");const r=n[0],a=e._OrtEndProfiling(r);if(0===a)throw new Error("Can\'t get an profile file name");e._OrtFree(a)},e.extractTransferableBuffers=t=>{const e=[];for(const n of t){const t=n[2];!Array.isArray(t)&&t.buffer&&e.push(t.buffer)}return e}},361:function(t,e,n){"use strict";var r=this&&this.__createBinding||(Object.create?function(t,e,n,r){void 0===r&&(r=n);var a=Object.getOwnPropertyDescriptor(e,n);a&&!("get"in a?!e.__esModule:a.writable||a.configurable)||(a={enumerable:!0,get:function(){return e[n]}}),Object.defineProperty(t,r,a)}:function(t,e,n,r){void 0===r&&(r=n),t[r]=e[n]}),a=this&&this.__setModuleDefault||(Object.create?function(t,e){Object.defineProperty(t,"default",{enumerable:!0,value:e})}:function(t,e){t.default=e}),i=this&&this.__importStar||function(t){if(t&&t.__esModule)return t;var e={};if(null!=t)for(var n in 
t)"default"!==n&&Object.prototype.hasOwnProperty.call(t,n)&&r(e,t,n);return a(e,t),e},o=this&&this.__importDefault||function(t){return t&&t.__esModule?t:{default:t}};Object.defineProperty(e,"__esModule",{value:!0}),e.dispose=e.getInstance=e.initializeWebAssembly=void 0;const u=i(n(449)),c=o(n(932)),s=n(474);let l,f=!1,p=!1,h=!1;const d=(t,e)=>e?t?"ort-wasm-simd-threaded.wasm":"ort-wasm-threaded.wasm":t?"ort-wasm-simd.wasm":"ort-wasm.wasm";e.initializeWebAssembly=async t=>{if(f)return Promise.resolve();if(p)throw new Error("multiple calls to \'initializeWebAssembly()\' detected.");if(h)throw new Error("previous call to \'initializeWebAssembly()\' failed.");p=!0;const e=t.initTimeout,r=t.numThreads,a=t.simd,i=r>1&&(()=>{try{return"undefined"!=typeof SharedArrayBuffer&&("undefined"!=typeof MessageChannel&&(new MessageChannel).port1.postMessage(new SharedArrayBuffer(1)),WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,5,4,1,3,1,1,10,11,1,9,0,65,0,254,16,2,0,26,11])))}catch(t){return!1}})(),o=a&&(()=>{try{return WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,10,30,1,28,0,65,0,253,15,253,12,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,253,186,1,26,11]))}catch(t){return!1}})(),y="string"==typeof t.wasmPaths?t.wasmPaths:void 0,b=d(!1,i),m=d(o,i),g="object"==typeof t.wasmPaths?t.wasmPaths[m]:void 0;let v=!1;const w=[];if(e>0&&w.push(new Promise((t=>{setTimeout((()=>{v=!0,t()}),e)}))),w.push(new Promise(((t,e)=>{const r=i?s:c.default,a={locateFile:(t,e)=>i&&t.endsWith(".worker.js")&&"undefined"!=typeof Blob?URL.createObjectURL(new Blob([n(154)],{type:"text/javascript"})):t===b?null!=g?g:(null!=y?y:e)+m:e+t};if(i)if("undefined"==typeof Blob)a.mainScriptUrlOrBlob=u.join("/","ort-wasm-threaded.js");else{const t=`var ortWasmThreaded=(function(){var _scriptDir;return ${r.toString()}})();`;a.mainScriptUrlOrBlob=new Blob([t],{type:"text/javascript"})}r(a).then((e=>{p=!1,f=!0,l=e,t()}),(t=>{p=!1,h=!0,e(t)}))}))),await Promise.race(w),v)throw new Error(`WebAssembly backend initializing failed due to timeout: ${e}ms`)},e.getInstance=()=>{if(f&&l)return l;throw new Error("WebAssembly is not initialized yet.")},e.dispose=()=>{var t;!f||p||h||(p=!0,null===(t=l.PThread)||void 0===t||t.terminateAllThreads(),l=void 0,p=!1,f=!1,h=!0)}},154:t=>{"use strict";t.exports=\'"use strict";var e={},t="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node;if(t){var r=require("worker_threads"),a=r.parentPort;a.on("message",(e=>onmessage({data:e})));var o=require("fs");Object.assign(global,{self:global,require:require,Module:e,location:{href:__filename},Worker:r.Worker,importScripts:function(e){(0,eval)(o.readFileSync(e,"utf8"))},postMessage:function(e){a.postMessage(e)},performance:global.performance||{now:function(){return Date.now()}}})}var s=!1,n=[],i=function(){var e=Array.prototype.slice.call(arguments).join(" ");t?o.writeSync(2,e+"\\\\n"):console.error(e)};self.alert=function(){var t=Array.prototype.slice.call(arguments).join(" ");postMessage({cmd:"alert",text:t,threadId:e._pthread_self()})},e.instantiateWasm=(t,r)=>{var a=new WebAssembly.Instance(e.wasmModule,t);return r(a),e.wasmModule=null,a.exports},self.onunhandledrejection=e=>{throw e.reason??e},self.onmessage=t=>{try{if("load"===t.data.cmd){if(e.wasmModule=t.data.wasmModule,e.wasmMemory=t.data.wasmMemory,e.buffer=e.wasmMemory.buffer,e.ENVIRONMENT_IS_PTHREAD=!0,"string"==typeof t.data.urlOrBlob)importScripts(t.data.urlOrBlob);else{var 
r=URL.createObjectURL(t.data.urlOrBlob);importScripts(r),URL.revokeObjectURL(r)}ortWasmThreaded(e).then((function(t){e=t}))}else if("run"===t.data.cmd){e.__performance_now_clock_drift=performance.now()-t.data.time,e.__emscripten_thread_init(t.data.pthread_ptr,0,0,1),e.establishStackSpace(),e.PThread.receiveObjectTransfer(t.data),e.PThread.threadInitTLS(),s||(n.forEach((t=>{e.executeNotifiedProxyingQueue(t)})),n=[],s=!0);try{e.invokeEntryPoint(t.data.start_routine,t.data.arg)}catch(t){if("unwind"!=t){if(!(t instanceof e.ExitStatus))throw t;e.keepRuntimeAlive()||e.__emscripten_thread_exit(t.status)}}}else"cancel"===t.data.cmd?e._pthread_self()&&e.__emscripten_thread_exit(-1):"setimmediate"===t.data.target||("processProxyingQueue"===t.data.cmd?s?e.executeNotifiedProxyingQueue(t.data.queue):n.push(t.data.queue):(i("worker.js received unknown command "+t.data.cmd),i(t.data)))}catch(t){throw i("worker.js onmessage() captured an uncaught exception: "+t),t&&t.stack&&i(t.stack),e.__emscripten_thread_crashed&&e.__emscripten_thread_crashed(),t}};\\n\'},384:()=>{},993:()=>{},908:()=>{},953:()=>{},925:()=>{},449:()=>{}},e={};function n(r){var a=e[r];if(void 0!==a)return a.exports;var i=e[r]={exports:{}};return t[r].call(i.exports,i,i.exports,n),i.exports}n.g=function(){if("object"==typeof globalThis)return globalThis;try{return this||new Function("return this")()}catch(t){if("object"==typeof window)return window}}(),(()=>{"use strict";const t=n(349),e=n(361);self.onmessage=n=>{switch(n.data.type){case"init-wasm":(0,e.initializeWebAssembly)(n.data.in).then((()=>postMessage({type:"init-wasm"})),(t=>postMessage({type:"init-wasm",err:t})));break;case"init-ort":try{const{numThreads:e,loggingLevel:r}=n.data.in;(0,t.initOrt)(e,r),postMessage({type:"init-ort"})}catch(t){postMessage({type:"init-ort",err:t})}break;case"create_allocate":try{const{model:e}=n.data.in,r=(0,t.createSessionAllocate)(e);postMessage({type:"create_allocate",out:r})}catch(t){postMessage({type:"create_allocate",err:t})}break;case"create_finalize":try{const{modeldata:e,options:r}=n.data.in,a=(0,t.createSessionFinalize)(e,r);postMessage({type:"create_finalize",out:a})}catch(t){postMessage({type:"create_finalize",err:t})}break;case"create":try{const{model:e,options:r}=n.data.in,a=(0,t.createSession)(e,r);postMessage({type:"create",out:a})}catch(t){postMessage({type:"create",err:t})}break;case"release":try{const e=n.data.in;(0,t.releaseSession)(e),postMessage({type:"release"})}catch(t){postMessage({type:"release",err:t})}break;case"run":try{const{sessionId:e,inputIndices:r,inputs:a,outputIndices:i,options:o}=n.data.in,u=(0,t.run)(e,r,a,i,o);postMessage({type:"run",out:u},(0,t.extractTransferableBuffers)(u))}catch(t){postMessage({type:"run",err:t})}break;case"end-profiling":try{const e=n.data.in;(0,t.endProfiling)(e),postMessage({type:"end-profiling"})}catch(t){postMessage({type:"end-profiling",err:t})}}}})()})();\n',"Worker",void 0,void 0)}},477:tr=>{"use strict";tr.exports=function(tr,tn,ti,to){var ta=self||window;try{try{try{ts=new ta.Blob([tr])}catch(tn){(ts=new(ta.BlobBuilder||ta.WebKitBlobBuilder||ta.MozBlobBuilder||ta.MSBlobBuilder)).append(tr),ts=ts.getBlob()}var ts,tu=ta.URL||ta.webkitURL,tl=tu.createObjectURL(ts),tc=new ta[tn](tl,ti);return tu.revokeObjectURL(tl),tc}catch(to){return new ta[tn]("data:application/javascript,".concat(encodeURIComponent(tr)),ti)}}catch(tr){if(!to)throw Error("Inline worker is not supported");return new ta[tn](to,ti)}}},4154:tr=>{"use strict";tr.exports='"use strict";var e={},t="object"==typeof 
process&&"object"==typeof process.versions&&"string"==typeof process.versions.node;if(t){var r=require("worker_threads"),a=r.parentPort;a.on("message",(e=>onmessage({data:e})));var o=require("fs");Object.assign(global,{self:global,require:require,Module:e,location:{href:__filename},Worker:r.Worker,importScripts:function(e){(0,eval)(o.readFileSync(e,"utf8"))},postMessage:function(e){a.postMessage(e)},performance:global.performance||{now:function(){return Date.now()}}})}var s=!1,n=[],i=function(){var e=Array.prototype.slice.call(arguments).join(" ");t?o.writeSync(2,e+"\\n"):console.error(e)};self.alert=function(){var t=Array.prototype.slice.call(arguments).join(" ");postMessage({cmd:"alert",text:t,threadId:e._pthread_self()})},e.instantiateWasm=(t,r)=>{var a=new WebAssembly.Instance(e.wasmModule,t);return r(a),e.wasmModule=null,a.exports},self.onunhandledrejection=e=>{throw e.reason??e},self.onmessage=t=>{try{if("load"===t.data.cmd){if(e.wasmModule=t.data.wasmModule,e.wasmMemory=t.data.wasmMemory,e.buffer=e.wasmMemory.buffer,e.ENVIRONMENT_IS_PTHREAD=!0,"string"==typeof t.data.urlOrBlob)importScripts(t.data.urlOrBlob);else{var r=URL.createObjectURL(t.data.urlOrBlob);importScripts(r),URL.revokeObjectURL(r)}ortWasmThreaded(e).then((function(t){e=t}))}else if("run"===t.data.cmd){e.__performance_now_clock_drift=performance.now()-t.data.time,e.__emscripten_thread_init(t.data.pthread_ptr,0,0,1),e.establishStackSpace(),e.PThread.receiveObjectTransfer(t.data),e.PThread.threadInitTLS(),s||(n.forEach((t=>{e.executeNotifiedProxyingQueue(t)})),n=[],s=!0);try{e.invokeEntryPoint(t.data.start_routine,t.data.arg)}catch(t){if("unwind"!=t){if(!(t instanceof e.ExitStatus))throw t;e.keepRuntimeAlive()||e.__emscripten_thread_exit(t.status)}}}else"cancel"===t.data.cmd?e._pthread_self()&&e.__emscripten_thread_exit(-1):"setimmediate"===t.data.target||("processProxyingQueue"===t.data.cmd?s?e.executeNotifiedProxyingQueue(t.data.queue):n.push(t.data.queue):(i("worker.js received unknown command "+t.data.cmd),i(t.data)))}catch(t){throw i("worker.js onmessage() captured an uncaught exception: "+t),t&&t.stack&&i(t.stack),e.__emscripten_thread_crashed&&e.__emscripten_thread_crashed(),t}};\n'},1670:tr=>{"use strict";tr.exports=__WEBPACK_EXTERNAL_MODULE__1670__},7067:()=>{},1296:()=>{},1384:()=>{},3993:()=>{},908:()=>{},6953:()=>{},9925:()=>{},2806:()=>{},6449:()=>{},2850:()=>{},5381:()=>{},5686:(tr,tn,ti)=>{"use strict";ti.r(tn),ti.d(tn,{flatbuffers:()=>to});var to={};to.Offset,to.Table,to.SIZEOF_SHORT=2,to.SIZEOF_INT=4,to.FILE_IDENTIFIER_LENGTH=4,to.SIZE_PREFIX_LENGTH=4,to.Encoding={UTF8_BYTES:1,UTF16_STRING:2},to.int32=new Int32Array(2),to.float32=new Float32Array(to.int32.buffer),to.float64=new Float64Array(to.int32.buffer),to.isLittleEndian=1===new Uint16Array(new Uint8Array([1,0]).buffer)[0],to.Long=function(tr,tn){this.low=0|tr,this.high=0|tn},to.Long.create=function(tr,tn){return 0==tr&&0==tn?to.Long.ZERO:new to.Long(tr,tn)},to.Long.prototype.toFloat64=function(){return(this.low>>>0)+4294967296*this.high},to.Long.prototype.equals=function(tr){return this.low==tr.low&&this.high==tr.high},to.Long.ZERO=new to.Long(0,0),to.Builder=function(tr){if(tr)tn=tr;else var 
tn=1024;this.bb=to.ByteBuffer.allocate(tn),this.space=tn,this.minalign=1,this.vtable=null,this.vtable_in_use=0,this.isNested=!1,this.object_start=0,this.vtables=[],this.vector_num_elems=0,this.force_defaults=!1},to.Builder.prototype.clear=function(){this.bb.clear(),this.space=this.bb.capacity(),this.minalign=1,this.vtable=null,this.vtable_in_use=0,this.isNested=!1,this.object_start=0,this.vtables=[],this.vector_num_elems=0,this.force_defaults=!1},to.Builder.prototype.forceDefaults=function(tr){this.force_defaults=tr},to.Builder.prototype.dataBuffer=function(){return this.bb},to.Builder.prototype.asUint8Array=function(){return this.bb.bytes().subarray(this.bb.position(),this.bb.position()+this.offset())},to.Builder.prototype.prep=function(tr,tn){tr>this.minalign&&(this.minalign=tr);for(var ti=1+~(this.bb.capacity()-this.space+tn)&tr-1;this.space=0&&0==this.vtable[tn];tn--);for(var ti=tn+1;tn>=0;tn--)this.addInt16(0!=this.vtable[tn]?tr-this.vtable[tn]:0);this.addInt16(tr-this.object_start);var ta=(ti+2)*to.SIZEOF_SHORT;this.addInt16(ta);var ts=0,tu=this.space;t:for(tn=0;tn=0;tu--)this.writeInt8(ts.charCodeAt(tu))}this.prep(this.minalign,to.SIZEOF_INT+ta),this.addOffset(tr),ta&&this.addInt32(this.bb.capacity()-this.space),this.bb.setPosition(this.space)},to.Builder.prototype.finishSizePrefixed=function(tr,tn){this.finish(tr,tn,!0)},to.Builder.prototype.requiredField=function(tr,tn){var ti=this.bb.capacity()-tr,to=ti-this.bb.readInt32(ti);if(0==this.bb.readInt16(to+tn))throw Error("FlatBuffers: field "+tn+" must be set")},to.Builder.prototype.startVector=function(tr,tn,ti){this.notNested(),this.vector_num_elems=tn,this.prep(to.SIZEOF_INT,tr*tn),this.prep(ti,tr*tn)},to.Builder.prototype.endVector=function(){return this.writeInt32(this.vector_num_elems),this.offset()},to.Builder.prototype.createString=function(tr){if(tr instanceof Uint8Array)var tn=tr;else{tn=[];for(var ti=0;ti=56320?ta:(ta<<10)+tr.charCodeAt(ti++)+-56613888)<128?tn.push(to):(to<2048?tn.push(to>>6&31|192):(to<65536?tn.push(to>>12&15|224):tn.push(to>>18&7|240,to>>12&63|128),tn.push(to>>6&63|128)),tn.push(63&to|128))}}this.addInt8(0),this.startVector(1,tn.length,1),this.bb.setPosition(this.space-=tn.length),ti=0;for(var ts=this.space,tu=this.bb.bytes();ti>24},to.ByteBuffer.prototype.readUint8=function(tr){return this.bytes_[tr]},to.ByteBuffer.prototype.readInt16=function(tr){return this.readUint16(tr)<<16>>16},to.ByteBuffer.prototype.readUint16=function(tr){return this.bytes_[tr]|this.bytes_[tr+1]<<8},to.ByteBuffer.prototype.readInt32=function(tr){return this.bytes_[tr]|this.bytes_[tr+1]<<8|this.bytes_[tr+2]<<16|this.bytes_[tr+3]<<24},to.ByteBuffer.prototype.readUint32=function(tr){return this.readInt32(tr)>>>0},to.ByteBuffer.prototype.readInt64=function(tr){return new to.Long(this.readInt32(tr),this.readInt32(tr+4))},to.ByteBuffer.prototype.readUint64=function(tr){return new to.Long(this.readUint32(tr),this.readUint32(tr+4))},to.ByteBuffer.prototype.readFloat32=function(tr){return to.int32[0]=this.readInt32(tr),to.float32[0]},to.ByteBuffer.prototype.readFloat64=function(tr){return 
to.int32[to.isLittleEndian?0:1]=this.readInt32(tr),to.int32[to.isLittleEndian?1:0]=this.readInt32(tr+4),to.float64[0]},to.ByteBuffer.prototype.writeInt8=function(tr,tn){this.bytes_[tr]=tn},to.ByteBuffer.prototype.writeUint8=function(tr,tn){this.bytes_[tr]=tn},to.ByteBuffer.prototype.writeInt16=function(tr,tn){this.bytes_[tr]=tn,this.bytes_[tr+1]=tn>>8},to.ByteBuffer.prototype.writeUint16=function(tr,tn){this.bytes_[tr]=tn,this.bytes_[tr+1]=tn>>8},to.ByteBuffer.prototype.writeInt32=function(tr,tn){this.bytes_[tr]=tn,this.bytes_[tr+1]=tn>>8,this.bytes_[tr+2]=tn>>16,this.bytes_[tr+3]=tn>>24},to.ByteBuffer.prototype.writeUint32=function(tr,tn){this.bytes_[tr]=tn,this.bytes_[tr+1]=tn>>8,this.bytes_[tr+2]=tn>>16,this.bytes_[tr+3]=tn>>24},to.ByteBuffer.prototype.writeInt64=function(tr,tn){this.writeInt32(tr,tn.low),this.writeInt32(tr+4,tn.high)},to.ByteBuffer.prototype.writeUint64=function(tr,tn){this.writeUint32(tr,tn.low),this.writeUint32(tr+4,tn.high)},to.ByteBuffer.prototype.writeFloat32=function(tr,tn){to.float32[0]=tn,this.writeInt32(tr,to.int32[0])},to.ByteBuffer.prototype.writeFloat64=function(tr,tn){to.float64[0]=tn,this.writeInt32(tr,to.int32[to.isLittleEndian?0:1]),this.writeInt32(tr+4,to.int32[to.isLittleEndian?1:0])},to.ByteBuffer.prototype.getBufferIdentifier=function(){if(this.bytes_.length>10),56320+(1023&tu)))}return ta},to.ByteBuffer.prototype.__indirect=function(tr){return tr+this.readInt32(tr)},to.ByteBuffer.prototype.__vector=function(tr){return tr+this.readInt32(tr)+to.SIZEOF_INT},to.ByteBuffer.prototype.__vector_len=function(tr){return this.readInt32(tr+this.readInt32(tr))},to.ByteBuffer.prototype.__has_identifier=function(tr){if(tr.length!=to.FILE_IDENTIFIER_LENGTH)throw Error("FlatBuffers: file identifier must be length "+to.FILE_IDENTIFIER_LENGTH);for(var tn=0;tn{var tn=tr&&tr.__esModule?()=>tr.default:()=>tr;return __nested_webpack_require_546802__.d(tn,{a:tn}),tn},__nested_webpack_require_546802__.d=(tr,tn)=>{for(var ti in tn)__nested_webpack_require_546802__.o(tn,ti)&&!__nested_webpack_require_546802__.o(tr,ti)&&Object.defineProperty(tr,ti,{enumerable:!0,get:tn[ti]})},__nested_webpack_require_546802__.g=function(){if("object"==typeof globalThis)return globalThis;try{return this||Function("return this")()}catch(tr){if("object"==typeof window)return window}}(),__nested_webpack_require_546802__.o=(tr,tn)=>Object.prototype.hasOwnProperty.call(tr,tn),__nested_webpack_require_546802__.r=tr=>{"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(tr,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(tr,"__esModule",{value:!0})};var __nested_webpack_exports__=__nested_webpack_require_546802__(6018);return __nested_webpack_exports__})())}}]); \ No newline at end of file diff --git a/spaces/Xenova/the-tokenizer-playground/style.css b/spaces/Xenova/the-tokenizer-playground/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/Xenova/the-tokenizer-playground/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/XzJosh/Azuma-Bert-VITS2/text/tone_sandhi.py 
b/spaces/XzJosh/Azuma-Bert-VITS2/text/tone_sandhi.py deleted file mode 100644 index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Azuma-Bert-VITS2/text/tone_sandhi.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import List -from typing import Tuple - -import jieba -from pypinyin import lazy_pinyin -from pypinyin import Style - - -class ToneSandhi(): - def __init__(self): - self.must_neural_tone_words = { - '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝', - '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊', - '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去', - '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号', - '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当', - '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻', - '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂', - '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆', - '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂', - '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿', - '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台', - '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算', - '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨', - '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快', - '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜', - '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔', - '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事', - '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾', - '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼', - '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实', - '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头', - '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼', - '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数', - '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气', - '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈', - '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方', - '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴', - '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦', - '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝', - '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹', - '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息', - '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤', - '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家', - '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故', - '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨', - '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅', - '幸福', '熟悉', '计划', 
'扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱', - '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱', - '扫把', '惦记' - } - self.must_not_neural_tone_words = { - "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎" - } - self.punc = ":,;。?!“”‘’':,;.?!" - - # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041 - # e.g. - # word: "家里" - # pos: "s" - # finals: ['ia1', 'i3'] - def _neural_sandhi(self, word: str, pos: str, - finals: List[str]) -> List[str]: - - # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺 - for j, item in enumerate(word): - if j - 1 >= 0 and item == word[j - 1] and pos[0] in { - "n", "v", "a" - } and word not in self.must_not_neural_tone_words: - finals[j] = finals[j][:-1] + "5" - ge_idx = word.find("个") - if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶": - finals[-1] = finals[-1][:-1] + "5" - elif len(word) >= 1 and word[-1] in "的地得": - finals[-1] = finals[-1][:-1] + "5" - # e.g. 走了, 看着, 去过 - # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}: - # finals[-1] = finals[-1][:-1] + "5" - elif len(word) > 1 and word[-1] in "们子" and pos in { - "r", "n" - } and word not in self.must_not_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 桌上, 地下, 家里 - elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 上来, 下去 - elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开": - finals[-1] = finals[-1][:-1] + "5" - # 个 used as a measure word - elif (ge_idx >= 1 and - (word[ge_idx - 1].isnumeric() or - word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个': - finals[ge_idx] = finals[ge_idx][:-1] + "5" - else: - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - - word_list = self._split_word(word) - finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]] - for i, word in enumerate(word_list): - # conventional neutral tone in Chinese - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals_list[i][-1] = finals_list[i][-1][:-1] + "5" - finals = sum(finals_list, []) - return finals - - def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]: - # e.g. 看不懂 - if len(word) == 3 and word[1] == "不": - finals[1] = finals[1][:-1] + "5" - else: - for i, char in enumerate(word): - # "不" before tone4 should be bu2, e.g. 不怕 - if char == "不" and i + 1 < len(word) and finals[i + - 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - return finals - - def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]: - # "一" in number sequences, e.g. 一零零, 二一零 - if word.find("一") != -1 and all( - [item.isnumeric() for item in word if item != "一"]): - return finals - # "一" between reduplication words should be yi5, e.g. 看一看 - elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]: - finals[1] = finals[1][:-1] + "5" - # when "一" is an ordinal word, it should be yi1 - elif word.startswith("第一"): - finals[1] = finals[1][:-1] + "1" - else: - for i, char in enumerate(word): - if char == "一" and i + 1 < len(word): - # "一" before tone4 should be yi2, e.g. 一段 - if finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - # "一" before non-tone4 should be yi4, e.g. 
一天 - else: - # if "一" is followed by punctuation, it still reads as tone 1 - if word[i + 1] not in self.punc: - finals[i] = finals[i][:-1] + "4" - return finals - - def _split_word(self, word: str) -> List[str]: - word_list = jieba.cut_for_search(word) - word_list = sorted(word_list, key=lambda i: len(i), reverse=False) - first_subword = word_list[0] - first_begin_idx = word.find(first_subword) - if first_begin_idx == 0: - second_subword = word[len(first_subword):] - new_word_list = [first_subword, second_subword] - else: - second_subword = word[:-len(first_subword)] - new_word_list = [second_subword, first_subword] - return new_word_list - - def _three_sandhi(self, word: str, finals: List[str]) -> List[str]: - if len(word) == 2 and self._all_tone_three(finals): - finals[0] = finals[0][:-1] + "2" - elif len(word) == 3: - word_list = self._split_word(word) - if self._all_tone_three(finals): - # disyllabic + monosyllabic, e.g. 蒙古/包 - if len(word_list[0]) == 2: - finals[0] = finals[0][:-1] + "2" - finals[1] = finals[1][:-1] + "2" - # monosyllabic + disyllabic, e.g. 纸/老虎 - elif len(word_list[0]) == 1: - finals[1] = finals[1][:-1] + "2" - else: - finals_list = [ - finals[:len(word_list[0])], finals[len(word_list[0]):] - ] - if len(finals_list) == 2: - for i, sub in enumerate(finals_list): - # e.g. 所有/人 - if self._all_tone_three(sub) and len(sub) == 2: - finals_list[i][0] = finals_list[i][0][:-1] + "2" - # e.g. 好/喜欢 - elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \ - finals_list[0][-1][-1] == "3": - - finals_list[0][-1] = finals_list[0][-1][:-1] + "2" - finals = sum(finals_list, []) - # split an idiom into two words whose length is 2 - elif len(word) == 4: - finals_list = [finals[:2], finals[2:]] - finals = [] - for sub in finals_list: - if self._all_tone_three(sub): - sub[0] = sub[0][:-1] + "2" - finals += sub - - return finals - - def _all_tone_three(self, finals: List[str]) -> bool: - return all(x[-1] == "3" for x in finals) - - # merge "不" and the word behind it - # if we don't merge, "不" sometimes appears alone according to jieba, which may cause sandhi errors - def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - last_word = "" - for word, pos in seg: - if last_word == "不": - word = last_word + word - if word != "不": - new_seg.append((word, pos)) - last_word = word[:] - if last_word == "不": - new_seg.append((last_word, 'd')) - last_word = "" - return new_seg - - # function 1: merge "一" and the reduplication words on its left and right, e.g. "听","一","听" ->"听一听" - # function 2: merge single "一" and the word behind it - # if we don't merge, "一" sometimes appears alone according to jieba, which may cause sandhi errors - # e.g.
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')] - # output seg: [['听一听', 'v']] - def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - # function 1 - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][ - 0] == seg[i + 1][0] and seg[i - 1][1] == "v": - new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0] - else: - if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][ - 0] == word and pos == "v": - continue - else: - new_seg.append([word, pos]) - seg = new_seg - new_seg = [] - # function 2 - for i, (word, pos) in enumerate(seg): - if new_seg and new_seg[-1][0] == "一": - new_seg[-1][0] = new_seg[-1][0] + word - else: - new_seg.append([word, pos]) - return new_seg - - # the first and the second words are all_tone_three - def _merge_continuous_three_tones( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and self._all_tone_three( - sub_finals_list[i - 1]) and self._all_tone_three( - sub_finals_list[i]) and not merge_last[i - 1]: - # if the last word is a reduplication, don't merge, because reduplication needs to be handled by _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - - return new_seg - - def _is_reduplication(self, word: str) -> bool: - return len(word) == 2 and word[0] == word[1] - - # the last char of the first word and the first char of the second word are tone_three - def _merge_continuous_three_tones_2( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \ - merge_last[i - 1]: - # if the last word is a reduplication, don't merge, because reduplication needs to be handled by _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#": - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_reduplication( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if new_seg and word == new_seg[-1][0]: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def pre_merge_for_modify( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - seg = self._merge_bu(seg) - try: - seg = self._merge_yi(seg) - except: - print("_merge_yi failed") - seg = self._merge_reduplication(seg) - seg = 
self._merge_continuous_three_tones(seg) - seg = self._merge_continuous_three_tones_2(seg) - seg = self._merge_er(seg) - return seg - - def modified_tone(self, word: str, pos: str, - finals: List[str]) -> List[str]: - finals = self._bu_sandhi(word, finals) - finals = self._yi_sandhi(word, finals) - finals = self._neural_sandhi(word, pos, finals) - finals = self._three_sandhi(word, finals) - return finals diff --git a/spaces/XzJosh/Jiaran-Bert-VITS2/text/symbols.py b/spaces/XzJosh/Jiaran-Bert-VITS2/text/symbols.py deleted file mode 100644 index 9dfae4e633829f20c4fd767b1c7a9198911ed801..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Jiaran-Bert-VITS2/text/symbols.py +++ /dev/null @@ -1,51 +0,0 @@ -punctuation = ['!', '?', '…', ",", ".", "'", '-'] -pu_symbols = punctuation + ["SP", "UNK"] -pad = '_' - -# chinese -zh_symbols = ['E', 'En', 'a', 'ai', 'an', 'ang', 'ao', 'b', 'c', 'ch', 'd', 'e', 'ei', 'en', 'eng', 'er', 'f', 'g', 'h', - 'i', 'i0', 'ia', 'ian', 'iang', 'iao', 'ie', 'in', 'ing', 'iong', 'ir', 'iu', 'j', 'k', 'l', 'm', 'n', 'o', - 'ong', - 'ou', 'p', 'q', 'r', 's', 'sh', 't', 'u', 'ua', 'uai', 'uan', 'uang', 'ui', 'un', 'uo', 'v', 'van', 've', 'vn', - 'w', 'x', 'y', 'z', 'zh', - "AA", "EE", "OO"] -num_zh_tones = 6 - -# japanese -ja_symbols = ['I', 'N', 'U', 'a', 'b', 'by', 'ch', 'cl', 'd', 'dy', 'e', 'f', 'g', 'gy', 'h', 'hy', 'i', 'j', 'k', 'ky', - 'm', 'my', 'n', 'ny', 'o', 'p', 'py', 'r', 'ry', 's', 'sh', 't', 'ts', 'u', 'V', 'w', 'y', 'z'] -num_ja_tones = 1 - -# English -en_symbols = ['aa', 'ae', 'ah', 'ao', 'aw', 'ay', 'b', 'ch', 'd', 'dh', 'eh', 'er', 'ey', 'f', 'g', 'hh', 'ih', 'iy', - 'jh', 'k', 'l', 'm', 'n', 'ng', 'ow', 'oy', 'p', 'r', 's', - 'sh', 't', 'th', 'uh', 'uw', 'V', 'w', 'y', 'z', 'zh'] -num_en_tones = 4 - -# combine all symbols -normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols)) -symbols = [pad] + normal_symbols + pu_symbols -sil_phonemes_ids = [symbols.index(i) for i in pu_symbols] - -# combine all tones -num_tones = num_zh_tones + num_ja_tones + num_en_tones - -# language maps -language_id_map = { - 'ZH': 0, - "JA": 1, - "EN": 2 -} -num_languages = len(language_id_map.keys()) - -language_tone_start_map = { - 'ZH': 0, - "JA": num_zh_tones, - "EN": num_zh_tones + num_ja_tones -} - -if __name__ == '__main__': - a = set(zh_symbols) - b = set(en_symbols) - print(sorted(a&b)) - diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/dependency_versions_table.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/dependency_versions_table.py deleted file mode 100644 index 2fd6bfa1fa17aaeab7adf089e605380ff508a725..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/dependency_versions_table.py +++ /dev/null @@ -1,33 +0,0 @@ -# THIS FILE HAS BEEN AUTOGENERATED. To update: -# 1. modify the `_deps` dict in setup.py -# 2. 
run `make deps_table_update`` -deps = { - "Pillow": "Pillow", - "accelerate": "accelerate>=0.11.0", - "black": "black==22.8", - "datasets": "datasets", - "filelock": "filelock", - "flake8": "flake8>=3.8.3", - "flax": "flax>=0.4.1", - "hf-doc-builder": "hf-doc-builder>=0.3.0", - "huggingface-hub": "huggingface-hub>=0.10.0", - "importlib_metadata": "importlib_metadata", - "isort": "isort>=5.5.4", - "jax": "jax>=0.2.8,!=0.3.2", - "jaxlib": "jaxlib>=0.1.65", - "modelcards": "modelcards>=0.1.4", - "numpy": "numpy", - "parameterized": "parameterized", - "pytest": "pytest", - "pytest-timeout": "pytest-timeout", - "pytest-xdist": "pytest-xdist", - "safetensors": "safetensors", - "sentencepiece": "sentencepiece>=0.1.91,!=0.1.92", - "scipy": "scipy", - "regex": "regex!=2019.12.17", - "requests": "requests", - "tensorboard": "tensorboard", - "torch": "torch>=1.4", - "torchvision": "torchvision", - "transformers": "transformers>=4.21.0", -} diff --git a/spaces/YlcldKlns/bing/src/components/ui/textarea.tsx b/spaces/YlcldKlns/bing/src/components/ui/textarea.tsx deleted file mode 100644 index e25af722c7a5dc1121a9ab58d6716952f9f76081..0000000000000000000000000000000000000000 --- a/spaces/YlcldKlns/bing/src/components/ui/textarea.tsx +++ /dev/null @@ -1,24 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface TextareaProps - extends React.TextareaHTMLAttributes {} - -const Textarea = React.forwardRef( - ({ className, ...props }, ref) => { - return ( -